DETAILED ACTION
This action is responsive to papers filed on 9/25/2025.
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 25, 26, 29-36, and 39-44 are rejected under 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 25 and 35, these claims recite the limitation “automatically cross-reference a database with an identifier for the first content item to determine whether the audio and visual features have been previously processed”. The instant specification does not support this limitation as claimed. Rather, Paragraph 0048 of the instant specification states, “Based on this identifier, it is possible to determine whether this particular video had been previously processed”, and the same paragraph then states, “If the video had already been processed previously, the database storing such time-coded metadata file is updated and cross-referenced with the identifier and existing video signature”. Paragraph 0048 is the only paragraph of the instant specification that mentions cross-referencing, and there is no teaching in the specification that the cross-referencing is used to determine whether the features have been previously processed. Therefore, the above limitation is considered impermissible new matter and should be removed in the next reply to this Office Action. In the interest of applying prior art, this limitation will be interpreted in light of the specification. All dependent claims inherit this rejection through dependency from claims 25 and 35.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 25, 26, 29-36, and 39-44 are rejected under 35 U.S.C. 101 because, while the claims herein are directed to a method and/or system, which could be classified under one of the listed statutory classifications (i.e., 2019 Revised Patent Subject Matter Eligibility Guidance (hereinafter “PEG”) Step 1=Yes), the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding claims 25 and 35, the claims recite, in part, extracting audio and visual features from each time-coded segment of a plurality of segments of a first content item, wherein the extracting is performed automatically using audio and video processing techniques, and wherein extracted audio and visual features include time-coded locations corresponding to one or more segments of the first content item; automatically cross-referencing with an identifier for the content item to determine whether the audio and visual features have been previously processed; saving the extracted audio and visual features in a new time-based metadata file corresponding to the first content item; determining an insertion location, wherein the determining of the insertion location is based on analysis of the time-based metadata file; transmitting the first content item; selecting a portion of the time-based metadata file to transmit to a content distributor based on the determined insertion location; transmitting the selected portion of the time-based metadata file to a content distributor that selects a second content item for insertion based at least in part on the selected portion of the time-based metadata file; receiving the second content item from the content distributor; and transmitting the second content item and the time-based metadata file.
The limitations, as drafted and detailed above, are directed toward extracting features from content items, updating metadata to associate the features, and transmitting content of a commercial nature based on the metadata, which falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, and more specifically commercial interactions. Further, the claims are analogous to cases such as Electric Power Group (collecting information, analyzing it, and displaying certain results of the collection and analysis) and Intellectual Ventures v. Capital One Financial (collecting, displaying, and manipulating data). Accordingly, the claim recites an abstract idea (i.e., “PEG” Revised Step 2A Prong One=Yes).
This judicial exception is not integrated into a practical application. In particular, the claims only recite the additional elements of a server (claim 25), a database (claims 25, 35, simple storage), a display device (claims 25, 35), a video provider (claim 35), and input/output circuitry (claim 35). The additional technical elements above are recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of extracting, cross-referencing, saving, determining, transmitting, selecting, and receiving) such that they amount to no more than mere instructions to apply the exception using generic computer components. There are no additional functional limitations to be considered under Prong Two.
Accordingly, the additional technical elements above do not integrate the abstract idea/judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. More specifically, the additional elements fail to include (1) improvements to the functioning of a computer or to any other technology or technical field (see MPEP 2106.05(a)), (2) applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition (see Vanda memo), (3) applying the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b)), (4) effecting a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.05(c)), or (5) applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (see MPEP 2106.05(e) and Vanda memo).
Rather, the limitations merely add the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), or generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Thus, the claim is “directed to” an abstract idea (i.e., “PEG” Revised Step 2A Prong Two=No).
When considering Step 2B of the Alice/Mayo test, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims do not amount to significantly more than the abstract idea.
More specifically, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a server (claim 25), a database (claims 25, 35, simple storage), a display device (claims 25, 35), a video provider (claim 35), and input/output circuitry (claim 35) to perform the claimed functions amount to no more than mere instructions to apply the exception using generic computer components.
“Generic computer implementation” is insufficient to transform a patent-ineligible abstract idea into a patent-eligible invention (See Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Alice, 134 S. Ct. at 2352, 2357), and, more generally, “simply appending conventional steps specified at a high level of generality” to an abstract idea does not make that idea patentable (See Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Mayo, 132 S. Ct. at 1300). Moreover, “the use of generic computer elements like a microprocessor or user interface do not alone transform an otherwise abstract idea into patent-eligible subject matter” (See FairWarning, 120 U.S.P.Q.2d 1293, citing DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1256 (Fed. Cir. 2014)). As such, the additional elements of the claim do not add a meaningful limitation to the abstract idea because they would be generic computer functions in any computer implementation. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of the computer or improves any other technology. Their collective functions merely provide generic computer implementation.
The Examiner notes that simply implementing an abstract concept on a computer, without meaningful limitations to that concept, does not transform a patent-ineligible claim into a patent-eligible one (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Bancorp, 687 F.3d at 1280); that limiting the application of an abstract idea to one field of use does not necessarily guard against preempting all uses of the abstract idea (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Bilski, 130 S. Ct. at 3231); that the prohibition against patenting an abstract principle “cannot be circumvented by attempting to limit the use of the [principle] to a particular technological environment” (See Accenture, 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Flook, 437 U.S. at 584); and, finally, that merely limiting the field of use of the abstract idea to a particular existing technological environment does not render the claims any less abstract (See Affinity Labs, _F.3d_, 120 U.S.P.Q.2d 1201 (Fed. Cir. 2016), citing Alice, 134 S. Ct. at 2358; Mayo, 132 S. Ct. at 1294; Bilski v. Kappos, 561 U.S. 593, 612 (2010); Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014)).
Applicant herein only requires a general purpose computer (see Applicant specification Paragraphs 0031-0033); therefore, there does not appear to be any alteration or modification to the generic activities indicated, and they are also therefore recognized as insignificant activity with respect to eligibility.
The dependent claims 26, 29-34, 36, and 39-44 appear to merely limit transmitting metadata associated with an object, using “audio processing” to extract audio features, specifics of the audio features, using “video processing” to extract video features, specifics of the video features, determining whether content has been previously processed and cross-referencing with a video signature, and selecting relevant ads based on audio and video features. They therefore only limit the application of the idea and do not add significantly more than the idea (i.e., “PEG” Step 2B=No).
The server (claim 25), database (claims 25, 35, simple storage), display device (claims 25, 35), video provider (claim 35), and input/output circuitry (claim 35) are each generic computer components that perform the generic functions of extracting, cross-referencing, saving, determining, transmitting, selecting, and receiving, all common to electronics and computer systems.
Applicant's specification does not provide any indication that the server (claim 25), database (claims 25, 35, simple storage), display device (claims 25, 35), video provider (claim 35), and input/output circuitry (claim 35) are anything other than generic, off-the-shelf computer components. Therefore, the claims do not amount to significantly more than the abstract idea (i.e., “PEG” Step 2B=No).
Thus, based on the detailed analysis above, claims 25, 26, 29-36, and 39-44 are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 25, 26, 29-36, and 39-44 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Houh (U.S. Pub No. 2007/0106760) in view of Pike (CA 2,543,941), and further in view of Martinez (U.S. Pub No. 2007/0083537).
Regarding claims 25 and 35, Houh teaches extracting, by a server, audio and visual features from each segment of a plurality of segments of a content item (Figure 1 Reference 10, Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087); wherein the extracting is performed automatically using audio and video processing techniques, and wherein extracted audio and visual features include time-coded locations corresponding to one or more segments of the first content item (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087); saving, by the server, the extracted audio and visual features in a new time-based metadata file corresponding to the first content item (Paragraphs 0005, 0034, 0036, 0041-0047, 0091); storing, at the server, time-based metadata associated with the content item in the database with the audio and visual features of each segment of the plurality of segments of the content item (Paragraphs 0005, 0034, 0036, 0041-0047, 0091, enhanced metadata is “time-based metadata”, enhanced metadata repository is a “database”); determining, by the server, an insertion location, wherein the determining of the insertion location is based on the analysis of the time-based metadata file (Paragraphs 0006-0007, 0013-0016, 0082-0084, 0088-0103, temporal location); transmitting, by the server, the content item to a display device (Paragraphs 0090-0098, Figure 8 Reference 918 is a “display device”); selecting a portion of the time-based metadata to transmit to a content distributor based on the determined insertion location (Paragraphs 0006-0007, 0013-0016, 0082-0084, 0088-0103, temporal location);
transmitting, by the server, a portion of time-based metadata corresponding to a first segment of the plurality of segments to a content distributor that selects a second content based at least in part on the portion of time-based metadata corresponding to the first segment (Paragraphs 0091-0103, Figure 8A Reference 924 is a “content distributor”); receiving, by the server, the second content from the content distributor (Paragraphs 0006-0007, 0013-0016, 0082-0084, 0088-0103); and transmitting, by the server, the second content and the time-based metadata to the display device (Paragraphs 0006-0007, 0013-0016, 0082-0084, 0088-0103).
Houh does not appear to specify cross-referencing, by the server, a database with an identifier for a content item. However, Pike teaches automatically cross-referencing, by the server, a database with an identifier for a content item (Paragraphs 0045-0048). It would have been obvious to one having ordinary skill in the art at the time the invention was made to cross-reference and save existing metadata files since the claimed invention is merely a combination of old elements and the combination of each element merely would have performed the same function as it did separately and a person of ordinary skill in the art would have recognized that the results of the combination were predictable.
Houh and Pike do not appear to specify, in response to determining that the audio and video features have been previously processed, updating an existing time-based metadata file associated with the video program; and, in response to determining that the audio and video features have not been previously processed, going through an initial processing of the audio and video features. However, Martinez teaches, in response to determining that features of a media item have been previously processed, updating an existing time-based metadata file associated with the video program; and, in response to determining that the features of a media item have not been previously processed, going through an initial processing of the audio and video features (Paragraphs 0037, 0055, a database entry needs to be initially set up, but if a database entry already exists and there is new information, the database entry is altered). It would have been obvious to one having ordinary skill in the art at the time the invention was made to update database entries as new information is obtained, as opposed to creating a new file each time, since creating a new file each time would confuse and clutter relevant information, and updating entries to include new information allows the system to run more smoothly with more compact storage of data.
Regarding claims 26, 36, Houh teaches determining an object in a first segment of the first content item corresponding to the selected portion of the time-based metadata file; transmitting, by the server, metadata associated with the object (Paragraph 0045).
Regarding claims 29, 39, Houh teaches identifying, by the server, extracted audio features from the content item using audio processing, wherein the extracted audio features comprise time-coded locations of each time-coded segment of the plurality of segments within the content item (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087).
Regarding claims 30, 40, Houh teaches the extracted audio features include one or more of discrete sounds and background noise (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087).
Regarding claims 31, 41, Houh teaches identifying, by the server, extracted video features from the first content item using video processing, wherein the extracted video features comprise time-coded locations of each segment of the plurality of segments within the content item (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087).
Regarding claims 32, 42, Houh teaches the extracted video features include one or more of actors, characters, animals, objects, geographic locations, background, setting, theme, events, or scenes (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087).
Regarding claims 33, 43, Houh does not appear to specify determining that the content item has been processed previously; in response to determining that the content item has been processed previously, cross-referencing the identifier for the content item and an existing video signature. However, Martinez teaches determining that the content item has been processed previously (new or existing file, which reads on a determination of being previously processed); in response to determining that the content item has been processed previously, cross-referencing the identifier for the content item and an existing video signature (Paragraph 0033, video files naturally have “video signatures”, Paragraphs 0045-0048, cross-referencing and updating). It would have been obvious to one having ordinary skill in the art at the time the invention was made to cross-reference and update existing metadata files since the claimed invention is merely a combination of old elements and the combination of each element merely would have performed the same function as it did separately and a person of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claims 34, 44, Houh teaches the second content is selected, by the content distributor, at least in part based on audio and visual features of a segment of the plurality of the segments (Paragraphs 0005, 0012, 0017, 0034, 0036-0042, 0087).
Response to Arguments
Applicant argues “The transmission of pre-computed time-based metadata to the advertising distributor allows for insertion of ads in locations other than beginning or the end of the video without the need for computationally intensive on-the-fly scene analysis. For at least these reasons, the technique of the pending claims improves and accelerates over-the-network delivery of advertisement data for insertion in appropriate time-code locations in the video”. However, an improvement to ad-insertion techniques is merely an improvement to the abstract idea. In the SAP decision (See SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163, 127 USPQ2d 1597, 1599 (Fed. Cir. 2018)), the court found that an improvement made to the abstract idea itself is not patent eligible. SAP v. InvestPic: Page 2, line 22 through Page 3, line 13 - even assuming that the algorithms claimed are groundbreaking, innovative, or even brilliant, the claims are ineligible because their innovation is an innovation in ineligible subject matter: they are nothing but a series of mathematical algorithms based on selected information and the presentation of the results of those algorithms. Thus, the advance lies entirely in the realm of abstract ideas, with no plausible alleged innovation in the non-abstract application realm; an advance of this nature is ineligible for patenting. And Page 10, lines 18-24 - even if a process of collecting and analyzing information is limited to particular content, or a particular source, that limitation does not make the collection and analysis other than abstract.
Applicant’s arguments pertaining to the prior art are believed to have been rendered moot in view of the new grounds of rejection above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL BEKERMAN whose telephone number is (571)272-3256. The examiner can normally be reached 9AM-3PM EST M, T, TH, F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WASEEM ASHRAF can be reached on (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL BEKERMAN/ Primary Examiner, Art Unit 3621