DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 05, 2025 has been entered.
Notice to Applicant
The following is a Non-Final Office Action for Application Serial Number 16/106,817, filed on August 21, 2018. In response to Examiner's Final Office Action dated September 05, 2025, Applicant, on October 31, 2025, amended claims 38 and 48. Claims 38-57 are pending in this application and have been rejected below.
Response to Amendment
Applicant's amendments are acknowledged.
Regarding the 35 U.S.C. § 101 rejection, Applicant's arguments and amendments have been considered but are insufficient to overcome the rejection.
The 35 U.S.C. § 103 rejections are hereby maintained notwithstanding Applicant's remarks and amendments to claims 38 and 48.
Response to Arguments
Applicant's Arguments/Remarks filed October 31, 2025 (hereinafter Applicant Remarks) have been fully considered but are not persuasive. Applicant’s Remarks will be addressed herein below in the order in which they appear in the response filed October 31, 2025.
Regarding the 35 U.S.C. § 101 rejection, Applicant reiterates that, as stated in MPEP § 2106.04(a), such human activity is specifically described in terms of fundamental economic principles or practices, commercial or legal interactions, and managing personal behavior or relationships or interactions between people. Moreover, Applicant maintains that independent claim 38, as well as independent claim 48, recites none of those enumerated activities and is in fact unrelated to interactions between people. Regarding fundamental economic principles or practices, as explained in MPEP § 2106.04(a)(2).II.A: "The courts have used the phrases 'fundamental economic practices' or 'fundamental economic principles' to describe concepts relating to the economy and commerce. Fundamental economic principles or practices include hedging, insurance, and mitigating risks." Consequently, Applicant maintains that the independent claims of the present application do not recite certain methods of organizing human activity or any other judicial exception recited by MPEP § 2106.04(a). Applicant cites limitations of claim 38 (see p. 10-11, Applicant Remarks) and submits that the limitations remove currently amended independent claim 38 from the realm of any abstract ideas. Support for the amendments to independent claim 38 included in the foregoing quote may be found at page 11, lines 1-12, page 15, lines 3-13, page 16, lines 3-13, page 22, lines 9-19, page 22, line 21 through page 23, line 5 and page 25, lines 1-7 of the present application.
Applicant respectfully asserts that independent claim 38, as amended, recites a particular video content modification and delivery pipeline including substantive data and control flows that improve the functioning of a computer-based system. The claimed system requires a specific data acquisition technique, i.e., receiving usage data as periodic telemetry heartbeats during content playback, and computes per-interval engagement using a weighting function applied to signals indicative of consumer distraction and consumer engagement. In addition, the modification process recited by currently amended independent claim 38 is performed by the system under content constraints including preservation of narrative order and scene continuity, and results in the redistribution of ads, the inclusion of alternate content at the timecode interval granularity, or both. The claimed system also produces real-time output by generating and providing, during video content consumption, a modified sequence of timecode intervals for immediate playback as a substitute for the original intervals. Moreover, the disclosed and claimed system engages in closed-loop machine learning by modifying one or more parameters of the weighting function applied to behavioral information, with the modification being evaluated based on paired first and second engagement levels, and then applies the updated process to other video content.
In response, Examiner respectfully disagrees. Examiner respectfully reminds Applicant that claims are evaluated to ensure that the claim itself reflects the disclosed improvement; see MPEP 2106.04(d)(1). Examiner emphasizes that the pending claims and original disclosure do not fully reflect the methods and techniques Applicant describes above. Examiner further maintains that the aforementioned claim language recites an abstract idea based on certain methods of organizing human activity, specifically, limitations that recite managing personal behavior, as well as commercial interactions involving advertising, marketing, and/or sales behaviors. Examiner finds the additional elements (i.e., the video content analysis system comprising: a computing platform including a hardware processor and a system memory storing a content assessment software code and machine learning) recited in the claims do not take the claims out of the certain methods of organizing human activity and mental processes groupings; see MPEP 2106.04(a)(2)(III)(C).
Regarding the 35 U.S.C. § 101 rejection, Applicant states the aforementioned features of currently amended independent claim 38 are computer-centric limitations that change how the computer system operates, i.e., how it sequences and serves timecode intervals of video content to a playback device in real-time. Like Enfish (self-referential table improving computer operation) and McRO (specific rules improving computer-generated outputs) previously referenced by Applicant, currently amended independent claim 38 is directed to a specific improvement in computer functionality by enabling real-time reconfiguration and delivery of video content, rather than to a business rule or human interaction scheme. Thus, Applicant respectfully asserts that currently amended independent claim 38 is directed to patent eligible subject matter under Step 2A, prong (1).
In response, Examiner respectfully disagrees. Examiner finds the present claims are not comparable to the technical improvements disclosed in Enfish and/or McRO. With regard to Enfish, the claims asserted improvements in computer capabilities (i.e., the self-referential table for a computer database, which achieves benefits over conventional databases), and the specification provided sufficient support that the claims were directed to a specific implementation of a solution to a problem in the software arts. With regard to McRO, the claims demonstrated improvements to a specific technological process (i.e., lip synchronization and manipulation of character facial expressions), thereby improving computer animation without requiring an artist's constant intermediation, with significant support in the specification.
Examiner finds Applicant's invention aims to solve a business problem (media content assessment) rather than a technological one. As stated above, the pending claims and original disclosure do not fully reflect the methods and techniques Applicant describes above. Examiner maintains the additional elements as currently claimed are used as generic tools to apply the instructions of the abstract idea, which does not integrate the abstract idea into a practical application; see MPEP 2106.05(f) (using software to tailor information and provide it to the user on a generic computer), citing Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015).
Regarding the 35 U.S.C. § 101 rejection, Applicant submits that even if, arguendo, currently amended independent claim 38 is drawn to a judicial exception (which Applicant does not concede to be so), any such recited judicial exception is integrated into a practical application based on the aforementioned limitations of currently amended independent claim 38, as a result of which an assessment of consumer engagement is meaningfully applied so as to produce an immediate technical effect in a video content delivery pipeline. As recited by currently amended independent claim 38, an assessment based on heartbeat telemetry of usage data directly drives the constraints-based modification that provides a different, modified sequence of timecode intervals, during playback, thereby changing what the playback device renders.
In addition, and as noted above, the recited modification process is constrained by narrative order and scene continuity, and its output is a modified sequence of timecode intervals, constituting a practical application within a video content delivery architecture. Furthermore, the "machine learning" here is not generic, but specifically updates one or more of the parameters that deterministically change future determinations of consumer engagement with video content. This is a closed-loop control improvement to the recited system.
Applicant asserts that the features affirmatively required by currently amended independent claim 38 and described above integrate any purported judicial exception into a practical application that improves the operation of the computer system itself. Thus, for these additional reasons, Applicant respectfully submits that currently amended independent claim 38 is directed to patent eligible subject matter, under Step 2A, prong (2).
In response, Examiner respectfully disagrees. Examiner maintains Applicant has not provided a detailed explanation, fully supported by the original disclosure, showing that the technology used is being improved or that there was a technical problem in the technology that the claimed invention solved. Examiner respectfully reminds Applicant that, regardless of its complexity and/or granularity, modification of video content based on the assessment of consumer engagement, without meaningful limitations within the claims that amount to significantly more than the abstract idea itself, is a judicial exception (i.e., an abstract idea). Furthermore, Examiner respectfully reminds Applicant that general purpose computer elements/structure, similar to the claimed invention's system, used to apply a judicial exception by means of instructions implemented on a computer, have not been found by the courts to integrate the abstract idea into a practical application; see MPEP 2106.05(f).
Regarding the 35 U.S.C. § 101 rejection, Applicant states for the sake of completeness, applying the second step of the analysis, that the elements of currently amended independent claim 38, when considered both individually and as an ordered combination, amount to significantly more than a judicial exception. For example, currently amended independent claim 38 recites an unconventional video content analysis and modification pipeline including heartbeat telemetry ingestion of usage data during content playback, timecode interval scoring via a weighting function on distraction/engagement signals, constraints-based modification resulting in either or both of ad redistribution or inclusion of alternate content, provision, during playback, of a modified timecode interval sequence for immediate rendering, and performing machine learning to improve system performance. As a result, Applicant respectfully asserts that the limitations affirmatively required by currently amended independent claim 38 amount to significantly more than a judicial exception under Step 2B.
Thus, for all of the reasons presented above, Applicant respectfully submits that currently amended independent claim 38 is directed to patentable subject matter. It is noted that independent claim 48 is currently amended to include limitations similar to those recited by currently amended independent claim 38. (See currently amended independent claim 48, above.) Consequently, Applicant respectfully asserts that currently amended independent claim 48 is also directed to patentable subject matter for reasons similar to those discussed above. As such, claims 39-47, depending from and further limiting currently amended independent claim 38, and claims 49-57, depending from and further limiting currently amended independent claim 48, are also directed to patentable subject matter. Accordingly, Applicant respectfully requests withdrawal of the present rejection of claims 38-57 under 35 U.S.C. § 101.
In response, Examiner respectfully disagrees. Examiner maintains Applicant has not provided a detailed explanation, fully supported by the original disclosure, showing that the technology used is being improved, that there was a technical problem in the technology that the claimed invention solved, or that the ordered combination of the known elements amounts to significantly more than the abstract idea.
Examiner respectfully notes merely confining the abstract idea to a particular technological environment does not establish a practical application. See Guidance, 84 Fed. Reg. at 54. “A claim does not cease to be abstract for section 101 purposes simply because the claim confines the abstract idea to a particular technological environment in order to effectuate a real-world benefit.” In re Mohapatra, 842 F. App’x 635, 638 (Fed. Cir. 2021).
Examiner maintains the claims recite additional elements used as tools to perform the instructions of the abstract idea, without disclosing limitations that integrate the abstract idea into a practical application; nor do these elements provide meaningful limitations that transform the judicial exception into significantly more than the abstract idea itself. Additionally, Examiner notes the general use of a trained machine learning model does not automatically provide meaningful limitations to transform the abstract idea into a practical application. Furthermore, the machine learning functions recited in the claims do not appear to be sufficiently supported by the original disclosure and are therefore considered to be used solely as a tool to perform the instructions of the abstract idea. Applicant has not made any persuasive argument that would alter this analysis. For at least these reasons, the claims remain rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Regarding the 35 U.S.C. § 103 rejection, Applicant argues that, in rejecting independent claim 38 over Kataria, in view of Grosvenor, and further in view of St. Amant, the Office Action cited many paragraphs of Kataria for purported disclosure of the limitations "apply a modification process to modify the video content for the first consumer based on the plurality of first engagement levels, in real-time while the first consumer is consuming the video content." (See page 21 of the Office Action.) For example, the Office Action cited paragraph [0042] of Kataria for the teaching: "using path analysis of customized, tailored patient education instructional content, and most relevant communications and messages, the system is able to predict longitudinal activation and adherence, and then provide, in real-time, the right solution (healthcare intervention for that condition or that patient)." (Id.)
However, nowhere in Kataria does Kataria disclose, teach, or suggest that the "real-time" provision of the right healthcare intervention is provided while the patient is consuming the tailored patient education instructional content upon which the path analysis is performed, or that the tailored patient education instructional content is generated and provided to the patient in real-time during consumption of other patient education instructional content. (See generally Kataria.) Consequently, Applicant respectfully asserts that Kataria cannot be reasonably interpreted to disclose, teach, or suggest the limitations "apply a modification process to modify the video content for the first consumer based on the plurality of first engagement levels, the modification process being constrained to preserve narrative order and scene continuity while including at least one of: (i) redistribution of advertising across the plurality of timecode intervals, or (ii) selection of alternate content for one or more of the plurality of timecode intervals, to generate a modified video content in real-time while the first consumer is consuming the video content," and "provide the modified video content to the first consumer as a substitute for the video content while the consumer is consuming the video content," affirmatively required by currently amended independent claim 38, and analogously required by currently amended independent claim 48.
As such, Applicant asserts that the remaining claims, depending from and further limiting independent claims 38 and 48, respectively, are also distinguishable over the prior art of record (see p. 17-21, Applicant Remarks).
In response, Examiner respectfully disagrees. Examiner finds Applicant's amendments lack written description support showing that Applicant was in possession of the invention at the time the application was filed. Specifically, Examiner finds the specification states that the automated solution for automating assessment of media content desirability disclosed in the present application may include additional actions related to machine learning (see p. 19, ln. 22 – p. 20, ln. 2); however, the specification fails to sufficiently tie the machine learning to the “improve the content assessment software code by performing a machine learning to learn from training data including the first usage data and the second usage data, wherein the machine learning adjusts at least one parameter of the weighting function and wherein the machine learning is configured to learn from an increase or a decrease in each of the plurality of second engagement levels as compared to each corresponding one of the plurality of first engagement levels, as a result of modifying the video content, wherein improving the content assessment software code by performing the machine learning alters the modification process” limitations as presented in the pending claims. Additionally, Examiner finds the “the modification process being constrained to preserve narrative order and scene continuity” limitation also lacks written description support in the specification sufficient to demonstrate to one of ordinary skill in the art that the claimed invention achieves such a function. For at least these reasons, Examiner finds that, given the broadest reasonable interpretation, the prior art of record is sufficient to teach the claim language as presented, and claims 38-57 remain rejected under 35 U.S.C. § 103 as being unpatentable over the prior art of record.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 38-57 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 38 and 48 recite the limitation “the modification process being constrained to preserve narrative order and scene continuity”. Applicant relies on p. 10, ln. 21 – p. 11, ln. 12, p. 22, ln. 9-19 and Figs. 3-4 of the original disclosure to provide support for the assertion that the modified video content is constrained to preserve narrative order and scene continuity as described. However, there is no actual description given, and the specification fails to provide support for the concept because it does not provide disclosure in sufficient detail to demonstrate to one of ordinary skill in the art that the claimed invention achieves such a function. For the purpose of examination, Examiner will interpret accordingly.
Claims 38-57 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 38 and 48 recite “improv[ing] the content assessment software code by performing a machine learning to learn from training data including the first usage data and the second usage data, wherein the machine learning adjusts at least one parameter of the weighting function and wherein the machine learning is configured to learn from an increase or a decrease in each of the plurality of second engagement levels as compared to each corresponding one of the plurality of first engagement levels, as a result of modifying the video content, wherein improving the content assessment software code by performing the machine learning alters the modification process”. The specification states that the automated solution for automating assessment of media content desirability disclosed in the present application may include additional actions related to machine learning (see p. 19, ln. 22 – p. 20, ln. 2). Additionally, the specification states that a content assessment software code may learn that the modification made in an action failed or was successful (see p. 24, ln. 7-14); that the comparison of the first and second usage data may be used as training data by content assessment software code, which may generate key performance indicators (KPIs) that drive the evolution of the content assessment software code (p. 24, ln. 15-19); and that content assessment software code may be configured to learn from comparison of second usage data with first usage data in order to improve automated assessment of media content desirability in the future (see p. 25, ln. 2-5).
However, there is no description linking the machine learning to the content assessment software code, nor does the specification provide a description that the machine learning adjusts at least one parameter of the weighting function, or that it is configured to learn from an increase or a decrease in each of the plurality of second engagement levels as compared to each corresponding one of the plurality of first engagement levels, as a result of modifying the video content, such that improving the content assessment software code by performing the machine learning alters the modification process. The specification fails to provide support for the machine learning limitations because “the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention including how to program the disclosed computer to perform the claimed function” (MPEP 2161.01, para. 6). For purposes of examination, Examiner will interpret accordingly.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: The claimed subject matter falls within the four statutory categories of patentable subject matter.
Claims 38-47 are directed towards a system and claims 48-57 are directed towards a non-transitory computer-readable medium, which are among the statutory categories of invention.
Step 2A – Prong One: The claims recite an abstract idea.
Claims 38-57 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite providing modified video content based on user engagement levels.
Claim 38 recites limitations directed to an abstract idea based on certain methods of organizing human activity. Specifically, video content being segregated into a plurality of timecode intervals each corresponding respectively to a scene comprising a plurality of shots of the video content, wherein each of the plurality of shots comprises a sequence of video frames that is captured from a same camera perspective without cinematic transitions including cuts; determine, based on the first session data weighted by a weighting function applied to the first behavioral information indicative of the distraction by the first consumer, a plurality of first engagement levels of the first consumer with the video content, wherein each of the plurality of first engagement levels of the first consumer corresponds to one of the plurality of timecode intervals; apply a modification process to modify the video content for the first consumer based on the plurality of first engagement levels, the modification process being constrained to preserve narrative order and scene continuity while including at least one of: (i) redistribution of advertising across the plurality of timecode intervals, or (ii) selection of alternate content for one or more of the plurality of timecode intervals, to generate a modified video content in real-time while the first consumer is consuming the video content; provide the modified video content to the first consumer as a substitute for the video content while the consumer is consuming the video; determine, based on the second session data weighted by the weighting function applied to the second behavioral information, a plurality of second engagement levels of the first consumer with the modified video content; and modify another video content, using the altered modification process constitutes methods based on managing personal
behavior or relationships between people, as well as commercial interactions related to advertising, marketing or sales behaviors. The recitation of a video content analysis system comprising: a computing platform including a hardware processor and a system memory storing a content assessment software code and machine learning does not take the claim out of the certain methods of organizing human activity grouping. Thus, the claim recites an abstract idea. Claim 48 recites certain methods of organizing human activity for similar reasons as claim 38.
Step 2A – Prong Two: The judicial exception is not integrated into a practical application.
The judicial exception is not integrated into a practical application. In particular, claim 38 recites receive first usage data as periodic telemetry heartbeats including first session data describing a use of a video content by a first consumer and first behavioral information indicative of distraction by the first consumer during the use of the video content by the first consumer; and receive second usage data as periodic telemetry heartbeats including session data describing a use of the modified video content by the first consumer and second behavioral information indicative of distraction by the first consumer during the use of the modified video content by the first consumer, which are limitations considered to be an insignificant extra-solution activity of collecting and delivering data; see MPEP 2106.05(g). Additionally, claim 38 recites a video content analysis system comprising: a computing platform including a hardware processor and a system memory storing a content assessment software code at a high-level of generality such that they amount to no more than generic computer components used as tools to apply the instructions of the abstract idea; see MPEP 2106.05(f). Additionally, claim 38 recites improve the content assessment software code by performing a machine learning to learn from training data including the first usage data and the second usage data, wherein the machine learning is configured to learn from an increase or a decrease in each of the plurality of second engagement levels as compared to each corresponding one of the plurality of first engagement levels, as a result of modifying the video content, wherein improving the content assessment software code by performing the machine learning alters the modification process. The general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application. 
Therefore, the machine learning models recited in the claims are used solely as a tool to perform the instructions of the abstract idea. Thus, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. Claim 38 is directed to an abstract idea. The additional elements recited in claim 48 also amount to no more than mere instructions to apply the exception using generic computer components; see MPEP 2106.05(f). Thus, the additional elements recited in claim 48 do not integrate the abstract idea into a practical application for reasons similar to those discussed for claim 38.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements in the claims other than the abstract idea per se, including the video content analysis system comprising: a computing platform including a hardware processor and a system memory storing a content assessment software code amount to no more than a recitation of generic computer elements utilized to perform generic computer functions, such as receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); electronic recordkeeping, Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log) and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; see MPEP 2106.05(d)(II). (see at least Specification [0015], [0018]). Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, since there are no limitations in the claim that transform the abstract idea into a patent eligible application such that the claim amounts to significantly more than the abstract idea itself, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
§ 101 Analysis of the dependent claims.
Regarding the dependent claims, claims 41, 42, 47, 51, 52 and 57 recite obtaining and providing limitations, respectively, which are considered insignificant extra-solution activities of collecting and delivering data; see MPEP 2106.05(g). Additionally, claims 39-47 and 49-57 recite steps that further narrow the abstract idea. No additional elements are disclosed in the dependent claims that were not considered with respect to independent claims 38 and 48. Therefore, claims 39-47 and 49-57 do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 38, 44, 47, 48, 54 and 57 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], and further in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant].
Referring to Claim 38, Kataria teaches:
A video content analysis system comprising:
a computing platform including a hardware processor and a system memory storing a content assessment software code (Kataria, [0070]; [0049]; [0052]);
the hardware processor configured to execute the content assessment software code to (Kataria, [0070]):
receive first usage data, as periodic telemetry heartbeats including first session data describing a use of a video content by a first consumer, the video content being segregated into a plurality of timecode intervals each corresponding respectively to a scene comprising a plurality of shots of the video content (Kataria, [0033]), “The machine learning algorithm extracts higher order knowledge about the patient from the data being collected in real time…”; (Kataria, [0046]-[0048]), “the system 104 generates a microlearning library 124 for the patient… This microlearning library can, for example, include videos of fifteen seconds duration to three minutes duration, and/or other materials designed to be consumed within a similar timeframe… Which videos are selected, or which portions of videos are selected, can be determined by the machine learning algorithm… In another configuration, the video repository contains only a single version of each video, however, the video is tagged with metadata indicating distinct portions within the video… patient then watches the videos within the microlearning library 126, and a record of which videos the patient watches 128 is made within the system 104. This data is then used to iteratively update the machine learning algorithm…FIG. 2 illustrates an exemplary user interface 200 for accessing the tailored video library… a ribbon 208 illustrating different sections 202 of education… there are four different sections 202. Within each section, there can be subgroups 204 of videos 206 which are presented to the user”, Examiner considers the videos and portions of videos to teach a scene with a plurality of shots;
determine, based on the first session data, a plurality of first engagement levels of the first consumer with the video content, wherein each of the plurality of first engagement levels of the first consumer corresponds to one of the plurality of timecode intervals (Kataria, [0025]), “passive data from their online engagement of video viewership to create a behavioral and engagement phenotype of the patient”; (Kataria, [0026]), “Content Viewership data. For example, if the patient is a returning patient, how many of the videos assigned to the patient for the patient's education did the patient watch? Did the patient skip any particular videos which are statistically linked to patient behavior or diagnosis”; (Kataria, [0022]), “quantitative behavior measures (specific data points, such as a viewership record”; (Kataria, [0048]);
apply a modification process to modify the video content for the first consumer based on the plurality of first engagement levels, the modification process being constrained to preserve narrative order and scene continuity while including at least one of: (i) redistribution of advertising across the plurality of timecode intervals, or (ii) selection of alternate content for one or more of the plurality of timecode intervals, to generate a modified video content in real-time while the first consumer is consuming the video content (Kataria, [0036]), “In a preferred configuration, the library of videos would incorporate microlearning, where each video is between 15 seconds and 3 minutes long, where the videos selected (and the order of the videos) are selected based on the specific data associated with that patient. In some configurations, the videos are pre-recorded to specific lengths, then selected for inclusion into the libraries of specific patients. In other configurations, the system can edit a longer video to the desired length and content required for a patient, then place the edited version of the video into the patient's library”; (Kataria, [0063]), “scores can be based on the individual user's responses, both present and past (when available). The various weights can be individually determined based on past behavior of an individual, or can be based on group behaviors. The final score generated, the “Behavioral Profile Score”, can then be used to generate a customized library (612) of videos… these videos can be “microlearning” videos which are selected specifically based on the scores, answers, and behaviors of the user. Similarly, in some cases, these videos can be cut, spliced, or modified based on the scores, answers, and behaviors of the user. Once the user has the custom library available, the system then records user interactions (614) with the library. For example, does the user watch all of the videos? 
Does the user start the video, then stop it before it concludes? Does the user watch the videos in order?”; (Kataria, [0042]), “using path analysis of customized, tailored patient education instructional content, and most relevant communications and messages, the system is able to predict longitudinal activation and adherence, and then provide, in real-time, the right solution (healthcare intervention for that condition or that patient)”; (Kataria, [0020]; [0031]; [0033]; [0046]; [0067]);
provide the modified video content to the first consumer as a substitute for the video content while the consumer is consuming the video content (Kataria, [0036]), “the system can edit a longer video to the desired length and content required for a patient, then place the edited version of the video into the patient's library”; (Kataria, [0048]), “the patient is provided with a ribbon 208 illustrating different sections 202 of education which the patient needs… Within each section, there can be subgroups 204 of videos 206 which are presented to the user. Each video 206 can be selected, modified, and otherwise edited by the machine learning algorithm as required”; (Kataria, [0063]; [0068]);
receive second usage data as periodic telemetry heartbeats including second session data describing a use of the modified video content by the first consumer (Kataria, [0033]), “The machine learning algorithm extracts higher order knowledge about the patient from the data being collected in real time…”; (Kataria, [0063]-[0064]), “Once the user has the custom library available, the system then records user interactions (614) with the library. For example, does the user watch all of the videos? Does the user start the video, then stop it before it concludes? Does the user watch the videos in order… The recorded interaction information is then analyzed (616), and the system can modify the question bank (618), such that the user (or a different user) could receive distinct questions in the future. In addition, the system can modify the weighted equation algorithm (620), such that which videos, or which portions of videos, would be presented to a similar user in the future can change”; (Kataria, [0068]), “as the patient watches videos within the library of videos: recording content viewership data regarding how patients view the library of videos; generating, by weighting at least: one clinically validated health score, the score of the patient, demographic data associated with the patient, and the content viewership data, an activation score of the patient; allocating additional resources, via a machine learning algorithm, for the patient based on the activation score; receiving data indicating portions of the library of videos watched by the patient, to yield viewing data; receiving online data identifying online behavior of the patient”; (Kataria, [0067]);
determine, based on the second session data, a plurality of second engagement levels of the first consumer with the modified video content (Kataria, [0041]), “With each iteration, an additional regression analysis on the factors can be performed and, when the additional regression analysis indicates a distinct weighting should take place, the factors are adjusted based on the additional regression analysis… The different scores can be regressed on each intervention model, and can be regressed again based on content viewership data variables, to provide a truer picture of the patient's expected activity and adherence level”; (Kataria, [0025]), “passive data from their online engagement of video viewership to create a behavioral and engagement phenotype of the patient”; (Kataria, [0026]), “Content Viewership data. For example, if the patient is a returning patient, how many of the videos assigned to the patient for the patient's education did the patient watch? Did the patient skip any particular videos which are statistically linked to patient behavior or diagnosis”; (Kataria, [0022]), “quantitative behavior measures (specific data points, such as a viewership record”; (Kataria, [0059]), “the system continues to improve itself, providing a better customized library of videos 420 to the user 402 with each iteration/interaction”; (Kataria, [0048]);
improve the content assessment software code by performing a machine learning to learn from training data including the first usage data and the second usage data, wherein the machine learning adjusts at least one parameter of the weighting function and wherein the machine learning is configured to learn from an increase or a decrease in each of the plurality of second engagement levels as compared to each corresponding one of the plurality of first engagement levels, as a result of modifying the video content (Kataria, [0043]), “over time multiple patients diagnosed with a disease may be assigned to view microlearning videos associated with that disease. The machine learning algorithm determines which video segments are assigned to each patient's customized video library, then receives feedback about which videos the patient actually watched and the patient's overall self-care. Based on the improvements to each patient, which videos were seen, etc., the machine learning algorithm can determine which videos are working to improve the patient's health and which are not, then customize videos…based on that feedback”; (Kataria, [0041]; [0047]; [0059]-[0060]);
wherein improving the content assessment software code by performing the machine learning alters the modification process (Kataria, [0060]), “the data bank 404 and the user response storage 422 of FIG. 4 are used as inputs 502 to the machine learning system. The system deploys an A.I. Algorithm/Model 504, which evaluates the current inputs 502 to the system and generates a prediction/output 506. These predictions 506 can be output 508 for use in… modifying the algorithm 418, changing how videos are selected, combined, modified, etc. … a parameter 512 (or more) of the A.I. algorithm/model 504 will be changed, meaning that the actual code being used to evaluate the inputs 502 will be modified according to the parameter 512… the code used to evaluate, learn, and predict the user's behavior is being iteratively modified in a particular way”; (Kataria, [0058]-[0059]);
modify another video content, using the altered modification process (Kataria, [0043]), “over time multiple patients diagnosed with a disease may be assigned to view microlearning videos associated with that disease. The machine learning algorithm determines which video segments are assigned to each patient's customized video library, then receives feedback about which videos the patient actually watched and the patient's overall self-care. Based on the improvements to each patient, which videos were seen, etc., the machine learning algorithm can determine which videos are working to improve the patient's health and which are not, then customize videos for that disease based on that feedback”; (Kataria, [0046]), “iteratively update the machine learning algorithm 130. If, for example, the system 104 identifies that viewers who watch a specific video, or a portion of a video… the system can compare the viewing habits of individuals with their results on a subsequent visit or, if the system has mechanisms in place to record patient behavior during self-care, can make correlations between the individual's behavior and the results, then make self-care recommendations to the patient and other patients based on those correlations”; (Kataria, [0060]).
Kataria teaches videos of fifteen seconds duration to three minutes duration, and/or other materials designed to be consumed within a similar timeframe, and portions of videos tagged with metadata (see par. 0046), but Kataria does not explicitly teach:
first usage data including first session data and first behavioral information indicative of distraction by the first consumer during the use of the video content by the first consumer,
wherein each of the plurality of shots comprises a sequence of video frames that is captured from a same camera perspective without cinematic transitions including cuts;
data weighted by a weighting function applied to the first behavioral information indicative of the distraction by the first consumer;
second usage data including second session data and second behavioral information indicative of distraction by the first consumer during the use of the modified video content by the first consumer; and
data weighted by the weighting function applied to the second behavioral information indicative of the distraction by the first consumer.
However Grosvenor teaches:
wherein each of the plurality of shots comprises a sequence of video frames that is captured from a same camera perspective without cinematic transitions including cuts (Grosvenor, [0031]), “it is necessary to capture the continuous video footage it is desired to review with a suitable image capture device and/or retrieve the footage from memory… a wearable video camera which captures several hours of continuous footage”; (Grosvenor, [0034]), “Continuous video footage is divided, at the segmentation stage (1), into a plurality of video segments. Once the video segments have been derived from the video footage, each segment is displayed (2), substantially concurrently, in a display window. There are a plurality of display windows available per video segment so that video segments can be played substantially concurrently and simultaneously be viewed by a user”; (Grosvenor, [0040]-[0041]), “… segmentation stage 1 where a plurality of video segments are derived from the video footage, using the associated relevant meta data… the footage may be actually divided in order to derive the video segments or a number of markers or pointers (hereinafter called "beacons") used to define the start positions of each of the video segments in the continuous (undivided) footage… the video footage may be simply divided chronologically into segments of equal length, using no other criteria. Alternatively, the footage could be divided according to which of several cameras captured it”; (Grosvenor, [0036]; [0064]; [0066]; [0073]).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the videos and portions of videos in Kataria to include the plurality of shots limitation as taught by Grosvenor. The motivation for doing so would have been to improve the method of generating behavioral profiles based on quantitative behavior measures (see Kataria par. 0022) by efficiently identifying video data of interest (see Grosvenor par. 0071).
Kataria teaches that, as a user watches and interacts with videos, data about the user's behavior is recorded in the user response storage (see par. 0058), but Kataria does not explicitly teach:
first usage data including first session data and first behavioral information indicative of distraction by the first consumer during the use of the video content by the first consumer,
data weighted by a weighting function applied to the first behavioral information indicative of the distraction by the first consumer;
second usage data including second session data and second behavioral information indicative of distraction by the first consumer during the use of the modified video content by the first consumer; and
data weighted by the weighting function applied to the second behavioral information indicative of the distraction by the first consumer.
However Amant teaches:
first usage data including first session data and first behavioral information indicative of distraction by the first consumer during the use of the video content by the first consumer (Amant, [0041]), “repetitively obtain sensor content and/or may repetitively generate behavioral profile content for a particular user. For example, sensor content may be gathered and/or otherwise obtained at regular and/or specified intervals, and/or behavioral profile content may be generated at regular and/or specified intervals. In an embodiment, one or more devices, systems, and/or processes may track behavioral profile content over a period of time, for example, such as to detect changes in behavioral profile content, for example”; (Amant, [0088]), “determine, estimate, and/or infer, for example, one or more parameters representative of a substantially current biological and/or behavioral state of a particular user… behavioral profile content, such as 521, may include a plurality of parameters representative of focal point… focus/distraction…social engagement level… a processor, such as behavioral processing unit 520, may repetitively and/or substantially periodically obtain sensor content and/or may repetitively and/or substantially periodically generate behavioral profile content, such as behavioral profile content 521, for a particular user, such as user 510”; (Amant, [0056]; [0117]; [0155]; [0160]);
data weighted by a weighting function applied to the first behavioral information indicative of the distraction by the first consumer (Amant, [0087]), “behavioral processing unit 520, may include circuitry for determining and/or selecting weighting parameters… may be based, at least in part, on content, such as parameters 515, identifying one or more aspects (e.g., title, genre, content type, such as music, interactive game, video, etc.) of content consumed by a particular user, such as user 510”; (Amant, [0088]-[0089]; [0101]; [0104]; [0108]; [0125]);
second usage data including second session data and second behavioral information indicative of distraction by the first consumer during the use of the modified video content by the first consumer (Amant, [0041]), “repetitively obtain sensor content and/or may repetitively generate behavioral profile content for a particular user. For example, sensor content may be gathered and/or otherwise obtained at regular and/or specified intervals, and/or behavioral profile content may be generated at regular and/or specified intervals. In an embodiment, one or more devices, systems, and/or processes may track behavioral profile content over a period of time, for example, such as to detect changes in behavioral profile content, for example”; (Amant, [0088]), “determine, estimate, and/or infer, for example, one or more parameters representative of a substantially current biological and/or behavioral state of a particular user… behavioral profile content, such as 521, may include a plurality of parameters representative of focal point… focus/distraction…social engagement level… a processor, such as behavioral processing unit 520, may repetitively and/or substantially periodically obtain sensor content and/or may repetitively and/or substantially periodically generate behavioral profile content, such as behavioral profile content 521, for a particular user, such as user 510”; (Amant, [0056]; [0117]; [0155]; [0160]); and
data weighted by the weighting function applied to the second behavioral information indicative of the distraction by the first consumer (Amant, [0087]), “behavioral processing unit 520, may include circuitry for determining and/or selecting weighting parameters… may be based, at least in part, on content, such as parameters 515, identifying one or more aspects (e.g., title, genre, content type, such as music, interactive game, video, etc.) of content consumed by a particular user, such as user 510”; (Amant, [0088]-[0089]), “… a processor, such as behavioral processing unit 520, may repetitively and/or substantially periodically obtain sensor content and/or may repetitively and/or substantially periodically generate behavioral profile content, such as behavioral profile content 521, for a particular user, such as user 510…, a processor, such as behavioral processing unit 520, may determine appropriate weights for various sensor combinations and/or for particular parameters, such as parameters 515, provided by one or more content providers…during online operation, for example, a set of inputs may be logged and/or later used as training parameters… determined and/or substantially known relationships, such as represented by parameters 550, may include relationships between behavioral profile content and/or user states and/or may include scientifically determined relationships…”; (Amant, [0058]; [0101]; [0104]; [0108]; [0125]).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the recorded user behavior in Kataria to include the usage data and weighted data limitations as taught by Amant. The motivation for doing so would have been to improve the method of generating behavioral profiles based on quantitative behavior measures (see Kataria par. 0022) by generating customized content for consumption by the particular user and tracking performance changes with respect to that user (see Amant par. 0035).
Referring to Claim 44, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria further teaches:
wherein the first usage data include timecode information corresponding to the use of the video content by the first consumer, and wherein the timecode information includes the plurality of timecode intervals (Kataria, [0046]-[0047]), “… This microlearning library can, for example, include videos of fifteen seconds duration to three minutes duration, and/or other materials designed to be consumed within a similar timeframe, which follow the principles of microlearning i.e. embedding cognitive science, behavior science, and realism. Which videos are selected, or which portions of videos are selected, can be determined by the machine learning algorithm. … The patient then watches the videos within the microlearning library 126, and a record of which videos the patient watches 128 is made within the system 104. This data is then used to iteratively update the machine learning algorithm 130…”; (Kataria, Fig. 2, [0048]), “FIG. 2 illustrates an exemplary user interface 200 for accessing the tailored video library. In this example 200, the patient is provided with a ribbon 208 illustrating different sections 202 of education which the patient needs… Within each section, there can be subgroups 204 of videos 206 which are presented to the user”.
Referring to Claim 47, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria further teaches:
the hardware processor further configured to execute the improved content assessment software code to:
provide the modified another video content to the another consumer (Kataria, [0043]), “over time multiple patients diagnosed with a disease may be assigned to view microlearning videos associated with that disease. The machine learning algorithm determines which video segments are assigned to each patient's customized video library, then receives feedback about which videos the patient actually watched and the patient's overall self-care. Based on the improvements to each patient, which videos were seen, etc., the machine learning algorithm can determine which videos are working to improve the patient's health and which are not, then customize videos for that disease based on that feedback”; (Kataria, [0047]; [0038]).
Referring to Claim 48, Kataria teaches:
A method for use by a video content analysis system including a computing platform having a hardware processor and a system memory storing a content assessment software code, the method comprising (Kataria, [0070]-[0071]):
Claim 48 discloses substantially the same subject matter as Claim 38, and is rejected using the same rationale as previously set forth.
Claim 54 discloses substantially the same subject matter as Claim 44, and is rejected using the same rationale as previously set forth.
Claim 57 discloses substantially the same subject matter as Claim 47, and is rejected using the same rationale as previously set forth.
Claims 39, 45, 49 and 55 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], and further in view of Kar et al., U.S. Publication No. 2018/0225710 [hereinafter Kar].
Referring to Claim 39, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches generating a customized library of videos based on a behavior profile score of a group's behavioral responses (see par. 0063), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
associate the first consumer, using the plurality of first engagement levels, with one of a plurality of predetermined aggregate consumption profiles identified based on usage of the video content by a plurality of other consumers; and
wherein the video content is modified based further on the one of the plurality of predetermined aggregate consumption profiles associated with the first consumer.
However Kar teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
associate the first consumer, using the plurality of first engagement levels, with one of a plurality of predetermined aggregate consumption profiles identified based on usage of the video content by a plurality of other consumers (Kar, [0055]), “representing each user as a user vector, the user segment identification system considers each media content cluster as a dimension within a given user vector, as mentioned. In addition to representing each user as a user vector, the user segment identification system weights each dimension within each user vector”; (Kar, [0058]-[0059]), “… the user segment identification system has identified seven user segments 202-214. As shown in FIG. 2A, table 216 shows information relating to the consumption of media content by users within user cluster 212. As mentioned, user cluster 212 represents the 814 users who have content consumption histories similar enough that the user segment identification system clustered together after the two-step clustering algorithm. Each user segment 202-214 shown in FIGS. 2A and 2B represents user clusters that the user segment identification system grouped together by the above-mentioned steps”; and
wherein the video content is modified based further on the one of the plurality of predetermined aggregate consumption profiles associated with the first consumer (Kar, [0061]), “the user segment identification system provides segment-based media content recommendations. In particular, the user segment identification system provides content recommendations based on the two-step clustering algorithm. For example, for a user who has not consumed a particular item of media content that is within a media content cluster associated with the user, the user segment identification system provides that particular item of media content as a recommendation to the user…”; (Kar, [0062]).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the behavior profile scores in Kataria to include the consumer limitations as taught by Kar. The motivation for doing so would have been to improve the method of generating behavioral profiles based on quantitative behavior measures (see Kataria par. 0022) by accurately predicting user content consumption (see Kar par. 0021).
Referring to Claim 45, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches generating a customized library of videos based on a behavior profile score of a group's behavioral responses (see par. 0063), but Kataria does not explicitly teach:
wherein the first usage data include advertising consumption by the first consumer, wherein an advertisement is included in at least one of the plurality of timecode intervals.
However Kar teaches:
wherein the first usage data include advertising consumption by the first consumer, wherein an advertisement is included in at least one of the plurality of timecode intervals (Kar, [0019]), “the user segment identification system constructs behavioral models for groups (e.g., segments or clusters) of users based on the above-mentioned analyses of media content and user session logs…”; (Kar, [0052]-[0053]), “evaluates session logs of each media content”; (Kar, [0066]), “the user segment identification system 400 includes a session log evaluator 402. The session log evaluator can evaluate, analyze, extract, delineate, and otherwise pick out information relating to media content consumption by a user. For example, the session log evaluator 402 can analyze the content consumption history of a user, including times and/or amount of content consumed as well as the name of the item of media content, including any other relevant information. Furthermore, the session log evaluator 402 can learn a content consumption behavior model based on an analysis of content consumption history as described above”; (Kar, [0069]), “The user segment identifier 408 can interact with the session log evaluator 402 to group users together according to content consumption history. For example, the user segment identifier 408 can represent each user as a vector with a number of dimensions”.
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the behavior profile scores in Kataria to include the consumer limitations as taught by Kar. The motivation for doing this would have been to improve the method of generating behavioral profiles based on quantitative behavior measures (see Kataria par. 0022) to efficiently include the results of accurately predicting user content consumption (see Kar par. 0021).
Claim 49 discloses substantially the same subject matter as Claim 39, and is rejected using the same rationale as previously set forth.
Claim 55 discloses substantially the same subject matter as Claim 45, and is rejected using the same rationale as previously set forth.
Claims 40 and 50 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], and further in view of Kolowich et al., U.S. Publication No. 2017/0286976 [hereinafter Kolowich].
Referring to Claim 40, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches generating multi-dimensional scores to generate a custom library of videos (see par. 0057), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of engagement levels, wherein the engagement visualization map further comprises a heat map including the plurality of first engagement levels.
However Kolowich teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of engagement levels, wherein the engagement visualization map further comprises a heat map including the plurality of first engagement levels (Kolowich, [0060]), “FIG. 4B shows an example of the charts 410 rendered in the central interface 400 based on the tracking data 432… The pie chart on the right is an engagement heat map 414. This pie chart 414 shows the proportion of viewers who earned different engagement scores from 1 to 10, grouped in “Low,” “Medium,” and “High” categories”.
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the scores in Kataria to include the heat map limitation as taught by Kolowich. The motivation for doing this would have been to improve the method of providing a customized library of videos to users in Kataria (see par. 0059) to efficiently include the results of generating tracking reports (see Kolowich par. 0061).
Claim 50 discloses substantially the same subject matter as Claim 40, and is rejected using the same rationale as previously set forth.
Claims 41 and 51 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], and further in view of Hu et al., U.S. Publication No. 2017/0063763 [hereinafter Hu].
Referring to Claim 41, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches generating a customized microlearning video library (see par. 0020) and a customized communication and outreach strategy (see par. 0063), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
obtain marketing data identifying a channel of communication utilized to inform the first consumer about the video content prior to the use of the video content by the first consumer; and
correlate the first usage data with the marketing data to generate a marketing assessment.
However Hu teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
obtain marketing data identifying a channel of communication utilized to inform the first consumer about the video content prior to the use of the video content by the first consumer (Hu, [0047]), “the conversion module 230 generates a short message based on the extracted text content, as described fully above in association with FIG. 4. At operation 650, the analysis module 240 selects at least one communication channel to send the short message based on an engagement level associated with the at least one communication channel. The engagement level is determined from previous interactions by the user with the respective communication channel”; (Hu, [0032]; [0043]); and
correlate the first usage data with the marketing data to generate a marketing assessment (Hu, [0047]), “the presentation module 250 causes presentation of the short message at a mobile device associated with the user via the selected communication channel… where the user does not interact with the current message being sent, after a predetermined period of time has elapse, a different channel of communication may be used to send the same message”; (Hu, [0044]), “the analysis module 240 continuously monitors the level of active engagement. If the percentage of active engagement falls below a threshold, the analysis module 240 selects another communication channel to send the short message… the percentage of active engagement fails to transgress the threshold of ten percent, therefore a different channel of communication is selected to send current and subsequent messages”.
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the customized communication and outreach strategy in Kataria to include the marketing limitations as taught by Hu. The motivation for doing this would have been to improve the method of generating a customized, curated, microlearning video library, tailored to a user based on at least their patient behavior score, in Kataria (see par. 0005) to efficiently include the results of determining the best communication channels to present short messages to users (see Hu par. 0045).
Claim 51 discloses substantially the same subject matter as Claim 41, and is rejected using the same rationale as previously set forth.
Claims 42 and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], in view of Hu et al., U.S. Publication No. 2017/0063763 [hereinafter Hu], and further in view of McGovern et al., U.S. Publication No. 2017/0091810 [hereinafter McGovern].
Referring to Claim 42, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches generating a customized microlearning video library (see par. 0020) and a customized communication and outreach strategy (see par. 0063), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of first engagement levels;
obtain marketing data identifying a channel of communication utilized to inform the first consumer about the video content, prior to the use of the video content by the first consumer; and
correlate the first usage data with the marketing data to generate a marketing assessment, wherein the engagement visualization map includes the marketing assessment.
However Hu teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
obtain marketing data identifying a channel of communication utilized to inform the first consumer about the video content, prior to the use of the video content by the first consumer (Hu, [0047]), “the conversion module 230 generates a short message based on the extracted text content, as described fully above in association with FIG. 4. At operation 650, the analysis module 240 selects at least one communication channel to send the short message based on an engagement level associated with the at least one communication channel. The engagement level is determined from previous interactions by the user with the respective communication channel”; (Hu, [0032]; [0043]); and
correlate the first usage data with the marketing data to generate a marketing assessment (Hu, [0047]), “the presentation module 250 causes presentation of the short message at a mobile device associated with the user via the selected communication channel… where the user does not interact with the current message being sent, after a predetermined period of time has elapse, a different channel of communication may be used to send the same message”; (Hu, [0044]), “the analysis module 240 continuously monitors the level of active engagement. If the percentage of active engagement falls below a threshold, the analysis module 240 selects another communication channel to send the short message… the percentage of active engagement fails to transgress the threshold of ten percent, therefore a different channel of communication is selected to send current and subsequent messages”.
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the customized communication and outreach strategy in Kataria to include the marketing limitations as taught by Hu. The motivation for doing this would have been to improve the method of generating a customized, curated, microlearning video library, tailored to a user based on at least their patient behavior score, in Kataria (see par. 0005) to efficiently include the results of determining the best communication channels to present short messages to users (see Hu par. 0045).
Kataria teaches that, as the user watches and interacts with the videos, data about the user behavior is recorded in the user response storage (see par. 0058), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of first engagement levels;
wherein the engagement visualization map includes the marketing assessment.
However McGovern teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of first engagement levels (McGovern, [0058]), “the touchpoint attribute chart 2B00 shows a plurality of touchpoints…dataset of touchpoint attribute chart 2B00 comprises a time series of user level activity 234 that maps various touchpoints to a respective plurality of attributes”;
wherein the engagement visualization map includes the marketing assessment (McGovern, [0061]-[0063]), “engagement stacks”. The Examiner considers the engagement stacks to be the marketing assessment and the engagement stack contribution chart to be the engagement visualization map.
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the interaction data in Kataria to include the visualization limitations as taught by McGovern. The motivation for doing this would have been to improve the method of generating a customized, curated, microlearning video library, tailored to a user based on at least their patient behavior score, in Kataria (see par. 0005) to efficiently include the results of classifying and quantifying audience responses to Internet stimulation (see McGovern par. 0003).
Claim 52 discloses substantially the same subject matter as Claim 42, and is rejected using the same rationale as previously set forth.
Claims 43 and 53 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], and further in view of Chennavasin et al., U.S. Publication No. 2021/0090127 [hereinafter Chennavasin].
Referring to Claim 43, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches online viewing habits (see par. 0030), but Kataria does not explicitly teach:
wherein the behavioral information includes at least one of whether the first consumer is physically active, situated outdoors, or traveling during the use of the video content by the first consumer.
However Chennavasin teaches:
wherein the behavioral information includes at least one of whether the first consumer is physically active, situated outdoors, or traveling during the use of the video content by the first consumer (Chennavasin, [0121]), “current consumer activity data that do not relate to purchase activities 646 (e.g., a motion status of the consumer) may or may not be directly compared to aspects of promotions… The cluster analysis circuitry 622 may… process and analyze promotion purchase patterns associated with a plurality of consumers who share one or more common current consumer activity data items with the specific consumer. In other words, a plurality of consumers may be defined as a cohort of the specific consumer based on one or more current activity data items, for example, consumers who purchased promotions while or shortly before or after exercising, consumers who purchased promotions during travel, and the like. The promotion purchase patterns of this cohort may then be analyzed to determine whether the specific consumer is likely to purchase each of the promotions indicated in the promotion data 602. The implicit notion is that consumers who are performing the same or similar activities are likely to have similar interests in promotions and similar purchase patterns. For example, consumers who are exercising may be more likely to purchase water or sports drinks. The cluster analysis circuitry 622 may implement any suitable technique including, but not limited to, statistical analysis, machine learning, and the like”; (Chennavasin, [0042]), “the term ‘promotion’ may include, but is not limited to, any type of offered… media or the like…”; (Chennavasin, [0064]), “‘interest indication’ refers to an indication generated by a consumer in relation to a promotion, the interest indication indicating one or more of… viewing of the promotion by the consumer”; (Chennavasin, [0210]).
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the viewing habits in Kataria to include the behavioral information as taught by Chennavasin. The motivation for doing this would have been to improve the method of generating a customized, curated, microlearning video library, tailored to a user based on at least their patient behavior score, in Kataria (see par. 0005) to efficiently include the results of providing electronic marketing communications of promotions to consumers (see Chennavasin par. 0002).
Claim 53 discloses substantially the same subject matter as Claim 43, and is rejected using the same rationale as previously set forth.
Claims 46 and 56 are rejected under 35 U.S.C. 103 as being unpatentable over Kataria et al., U.S. Publication No. 2019/0027248 [hereinafter Kataria], in view of Grosvenor, U.S. Publication No. 2005/0031296 [hereinafter Grosvenor], in view of St. Amant et al., U.S. Publication No. 2019/0282155 [hereinafter Amant], and further in view of Rahman, U.S. Publication No. 2019/0098345 [hereinafter Rahman].
Referring to Claim 46, Kataria in view of Grosvenor in view of Amant teaches the video content analysis system of claim 38. Kataria teaches a customized communication and outreach strategy (see par. 0063) and online viewing habits (see par. 0030), but Kataria does not explicitly teach:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of first engagement levels, wherein the engagement visualization map comprises an advertisement indicator indicating a total number of advertisements present in each of the plurality of timecode intervals that includes at least one advertisement.
However Rahman teaches:
wherein the hardware processor is further configured to execute the content assessment software code to:
create an engagement visualization map of the video content based on the plurality of first engagement levels, wherein the engagement visualization map comprises an advertisement indicator indicating a total number of advertisements present in each of the plurality of timecode intervals that includes at least one advertisement (Rahman, [0038]), “a content maps for different segments and/or users consistent with embodiments of the present disclosure. As discussed above, information describing where advertisements are rendered within content (e.g., a content stream), how many advertisements are rendered, their frequency, their duration, and/or their type (e.g., video, long form, short form, still image, audio, and/or the like) may be reflected in a content map associated with the content generated by a content service and/or another service (e.g., a content mapping service)”; (Rahman, [0039]), “a content map may be selected for a particular segment based on how well suited the content map is for achieving certain content monetization objectives (e.g., advertisement realization rates), content viewing and/or impression objectives (e.g., total number of views), user engagement objectives, user feedback objectives and/or the like”; (Rahman, [0040]), “content mapping, a first content stream 200 associated with a first user may be associated with a content map articulating a pre-roll advertisement break… the number of advertisements, their frequency, and/or the duration of the advertisements may be tailored to the specific users based on their identified segments by using different content maps for each user”; (Rahman, [0033]; [0039]).
At the time the invention was filed, it would have been obvious to a person of ordinary skill in the art to have modified the outreach strategy in Kataria to include the visualization map as taught by Rahman. The motivation for doing this would have been to improve the method of generating a customized, curated, microlearning video library, tailored to a user based on at least their patient behavior score, in Kataria (see par. 0005) to efficiently include the results of dynamically mapping content for advertisement presentation based on information associated with a user (see Rahman par. 0003).
Claim 56 discloses substantially the same subject matter as Claim 46, and is rejected using the same rationale as previously set forth.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Conklin et al. (US 20160294907 A1) – Methods and systems for managing the playback of media content via a website accessed by a user computer are described. According to aspects, the methods and systems may access and retrieve various data associated with media content such as engagement data related to an interaction by a user with the media content playback, as well as social media data relating to playback of a set of media content by a set of additional users. The methods and systems may analyze any combination of the data to identify a relevant media file that may be of interest to the user and provide the media file to the user computer for playback by the user. The analysis models may be continuously updated and used to improve media selection and streamline partnerships with third-party entities.
Girouard et al. (US 8171509 B1) - A system and method for applying a database to video multimedia is disclosed. One embodiment provides media content owners the capability to exploit video processing capabilities using rich, interactive and compelling visual content on a network. Mechanisms of associating video with commerce offerings are provided. Video server and search server technologies are integrated with ad serving personalization agents to make the final presentations of content and advertising. Algorithms utilized by the system use a variety of techniques for making the final presentation decisions of which ads, with which content, served to which user.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Crystol Stewart, whose telephone number is (571) 272-1691. The examiner can normally be reached 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patty Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CRYSTOL STEWART/Primary Examiner, Art Unit 3624