DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
This communication is in response to the remarks and amendments filed 11/11/2025. Claims 1-8 and 10-20 are pending.
Response to Remarks
Applicant’s arguments have been carefully and respectfully considered in light of the instant amendment, but are not persuasive. Accordingly, this action has been made FINAL.
Claim Objections
The objections to the claims are withdrawn in view of the remarks and amendments filed.
Claim Rejections - 35 USC § 101
The rejections of the claims under 35 U.S.C. § 101 are withdrawn in view of the remarks and amendments filed.
Claim Rejections - 35 USC § 102 and 103
On pages 7-8 of the remarks, Applicant argues Deora does not disclose “sending feedback to the creator of the multimedia content”. Specifically, Applicant asserts Deora paragraph [0022] refers to “receiving multimedia content, not receiving feedback from the creator” and additionally alleges Deora “only discloses how to download multimedia content from the perspective of the viewer of the multimedia content, and how to play the multimedia content, but does not disclose sending feedback to the creator of the multimedia content.”
The Examiner respectfully disagrees with Applicant’s interpretation of Deora. More specifically, Deora makes clear that the content is a data file received via an upload or download, and that the analysis of the file (e.g., one or more video, graphical, audio, etc. files) identifies trigger content (see [0022]). This analysis benefits both users and creators, as described at [0021]: “few existing content creators actively ensure that their video, graphical etc. content would not cause [photosensitive epilepsy] to occur, or provide users with control over the content being served to them”. The creator and user are thereby assured that the file (e.g., video) has been analyzed, and are provided with options to skip the trigger content or to choose to view it, and may also be presented with other actions such as deletion of the file, modification of the file, removal of the triggering content, or addition of further warnings (see Deora, [0022]). Thus, as disclosed in Deora, the notification allows creators to delete, remove, and/or modify the uploaded video file after the video file has been analyzed and found to contain photosensitive triggering content.
As explained above, the combination of Li and Deora meets all the limitations of the independent claim(s). More specifically, Li discloses acquiring videos with photosensitive tags and videos without a photosensitive tag (Li, [0032], [0041]); a photosensitive video recognition model trained by a training set (Li, [0036]-[0037], [0041]-[0046]); and displaying videos that do not have a photosensitive tag to a browsing user when the user is in a state of photosensitive video filtering (Li, [0083]). Deora discloses the limitations of sending a notification to a creator, the notification comprising a recognition result of the uploaded video, in response to determining that a video uploaded by the creator is a photosensitive video (Deora, [0021]-[0022]). In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Therefore, the argued limitations are written broadly such that they read upon the cited references, or are shown explicitly by the references. As a result, the claims stand rejected as explained above and in the § 103 rejection below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-8, 12-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 20220182717 A1) in view of Deora et al. (US 20230007347 A1).
Regarding claim 1, Li discloses a push method of video, comprising: acquiring videos comprising video(s) with a photosensitive tag and video(s) without a photosensitive tag (“firstly detecting whether a shield instruction is received, and when the shield instruction is detected, then automatically performing a shield processing on a target parameter in the multimedia data satisfying a preset condition, the present disclosure can reach a function capable of performing unified shielding processing on all multimedia data that may cause user's physical discomfort” Li, [0032]; i.e., the shield instruction is equivalent to that of a photosensitive tag which indicates multimedia, or video(s), that contain photosensitive content such as a flicker frequency and videos are further disclosed using a mark, specifically, “the terminal device can perform the detection by means of a mark carried by the multimedia data itself, to detect whether the target parameter in the multimedia data to be played satisfies the preset condition.” Li, [0041]), wherein a video has a photosensitive tag if the video is determined as a photosensitive video by a photosensitive video recognition model (“detect whether a shield instruction for multimedia data is received, wherein the shield instruction can indicate that a shield processing is performed on multimedia data whose target parameter satisfies a preset condition. Among them, the target parameter can have a variety of types, and different types of target parameters can correspond to different preset conditions.
Specifically, the target parameter may be a light flicker frequency.” Li, [0036]-[0037]; “the terminal device can perform the detection by means of a mark carried by the multimedia data itself, to detect whether the target parameter in the multimedia data to be played satisfies the preset condition.” Li, [0041]; wherein the photosensitive video recognition model is that of a pre-trained prediction model, or parameter detection model, see Li [0041]-[0046] “When the detection is performed by a pre-trained detection model, the specific implementation manner of detecting whether the target parameter in the multimedia data to be played satisfies the preset condition can include detecting whether the target parameter in the multimedia data to be played satisfies the preset condition based on a parameter detection model... By training a parameter detection model by utilizing light flicker frequency sample data carrying a training mark, it is possible to train a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second”), the photosensitive video recognition model is trained by a training set, (“the parameter detection model can be trained from target parameter samples. Different target parameters correspond to different target parameter samples, and different target parameters samples can train the parameter detection models for detecting different target parameters. 
As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second is trained” Li, [0043]) and displaying video(s) that do not have a photosensitive tag among the videos to a browsing user in case that the browsing user is in a state of photosensitive video filtering (“for the multimedia data whose target parameters do not satisfy the preset conditions, turning on or turning off the shield instruction does not influence the multimedia data to be played whose target parameters do not satisfy the preset conditions, and the terminal device can directly play the multimedia data whose target parameters do not satisfy the preset conditions.” Li, [0083]; i.e., the videos that do not meet the target parameters, such as the shield instruction or mark, will be played, or pushed to the user, for display).
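For illustration only, and forming no part of the record: the marking and filtering scheme cited from Li [0041]-[0046] and [0083] above can be sketched in Python. All names are hypothetical; the only concrete values drawn from the cited text are the 5-30 flashes-per-second range and the first-mark convention of 1 (condition satisfied) versus 0 (not satisfied).

```python
# Hypothetical sketch of Li's cited scheme: a video whose light flicker
# frequency falls in the 5-30 times-per-second range is marked 1
# (photosensitive), otherwise 0; tagged videos are withheld from a browsing
# user who is in a state of photosensitive video filtering.

def mark_video(flicker_frequency_hz: float) -> int:
    """Return 1 if the flicker frequency satisfies the preset condition
    (5-30 flashes per second), else 0, mirroring Li's first-mark convention."""
    return 1 if 5 <= flicker_frequency_hz <= 30 else 0

def videos_to_display(videos: list[dict], filtering_on: bool) -> list[dict]:
    """Display only videos without a photosensitive tag when the browsing
    user is in the state of photosensitive video filtering."""
    if not filtering_on:
        return videos  # shield function off: all videos are pushed
    return [v for v in videos if v["mark"] == 0]

videos = [
    {"id": "a", "mark": mark_video(12.0)},  # in 5-30 Hz range -> tagged
    {"id": "b", "mark": mark_video(2.0)},   # below range -> not tagged
]
print([v["id"] for v in videos_to_display(videos, filtering_on=True)])  # ['b']
```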
Li discloses all of the subject matter as described above except for specifically teaching a notification is sent to a creator according to communication information provided by the creator in response to determining a video uploaded by the creator is a photosensitive video, the notification comprises a recognition result of the uploaded video and knowledge about photosensitive videos. However, Deora in the same field of endeavor teaches a notification is sent to a creator according to communication information provided by the creator in response to determining a video uploaded by the creator is a photosensitive video, the notification comprises a recognition result of the uploaded video and knowledge about photosensitive videos (a data file received via an upload or download and includes the analysis of the file (e.g., the file may be one or more video, graphical, audio etc.) containing trigger content, see Deora at [0022]; That is, the analysis of the file (e.g., video) benefits both users and creators as described at [0021] “few existing content creators actively ensure that their video, graphical etc. content would not cause [photosensitive epilepsy] to occur, or provide users with control over the content being served to them”. The creator and/or user are thereby assured that the file (e.g., video) is analyzed and are provided with options to skip the trigger content, choose to view the trigger content, or may also be presented with any other actions such as deletion of the file, modification of the file, removal of the triggering content, or addition of further warnings, see Deora at [0022]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Li and Deora before the effective filing date of the claimed invention. The motivation for this combination of references would have been to account for users suffering from photosensitive epilepsy when content creators create and upload a video (Deora, [0001]). This motivation for the combination of Li and Deora is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim 3, Li and Deora disclose the push method according to claim 1, wherein for each video, the video has a photosensitive tag if a photosensitive effect has been added to the video (“As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second” Li, [0043]; “Furthermore, it is also possible to set the first mark to 1 to indicate that the target parameter satisfies the preset condition, and to set the first mark to 0 to indicate that the target parameter does not satisfy the preset condition.” Li, [0046]).
Regarding claim 4, Li and Deora disclose the push method according to claim 1, further comprising: pushing the videos to the browsing user if the browsing user is not in the state of photosensitive video filtering (“in the case of covering the playing interface of the multimedia data to be played by the first shield mask layer and playing the multimedia data after shielding, the user can turn off the shield function” Li, [0077]; [0055]).
Regarding claim 5, Li and Deora disclose the push method according to claim 1, further comprising: displaying an option control for selecting whether to filter photosensitive videos in the case that a video to be played is a video with a photosensitive tag (“the setup interface includes the shield instruction option switch thereon, and in response to the close operation acting on the shield instruction option switch, the shield function for multimedia data playing is turned off.” Li, [0072]; Fig. 8); receiving an operation of the browsing user on the option control (“it is possible to turn on or off the mask function for the multimedia data playing by means of a shield instruction option switch, before multimedia data is played” Li, [0048]); and setting the browsing user to the state of photosensitive video filtering if the operation indicates filtering of photosensitive videos (“when the terminal device determines that the shield instruction for the multimedia data is received (e.g., in response to the first touch operation acting on the shield control provided on the first prompt page, or in response to a turn-on operation acting on the shield instruction option switch provided on the setup interface), the server may not send the multimedia data whose target parameters satisfy the preset conditions to the terminal device.” Li, [0070]).
Regarding claim 6, Li and Deora disclose the push method according to claim 5, wherein the option control for selecting whether to filter photosensitive videos is displayed in response to the video to be played back being a video with a photosensitive tag and is accessed for the first time by the browsing user under the same account or on the same terminal (“the user can turn on or turn off the shield function for multimedia data playing by the shield instruction option switch, before the multimedia data is played or when the application of playing the multimedia data is activated.” Li, [0073]).
Regarding claim 7, Li and Deora disclose the push method according to claim 5, wherein the option control is located on a mask layer that is located over the video to be played (“after covering the playing interface of the multimedia data to be played by the first shield mask layer, and playing the multimedia data after shielding, the terminal device can display the setup control on the first shield mask layer” Li, [0080]).
Regarding claim 8, Li and Deora disclose a push method of video, comprising: acquiring a video uploaded by a creator (“perform analysis of the data files that have been uploaded/downloaded and/or stored in a storage location” Deora, [0032]; i.e., the uploaded data files pertain to a creator/user which includes a video, see Deora [0021]-[0022]); processing the uploaded video using a photosensitive video recognition model to determine whether the uploaded video is a photosensitive video (“When the detection is performed by a pre-trained detection model, the specific implementation manner of detecting whether the target parameter in the multimedia data to be played satisfies the preset condition can include detecting whether the target parameter in the multimedia data to be played satisfies the preset condition based on a parameter detection model... By training a parameter detection model by utilizing light flicker frequency sample data carrying a training mark, it is possible to train a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second” Li [0041]-[0046]), the photosensitive video recognition model is trained by a training set, (“the parameter detection model can be trained from target parameter samples. Different target parameters correspond to different target parameter samples, and different target parameters samples can train the parameter detection models for detecting different target parameters. 
As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second is trained” Li, [0043]); adding a photosensitive tag to the uploaded video (Deora, [0021]-[0022], and [0032]) in response to the uploaded video being a photosensitive video (“By training a parameter detection model by utilizing light flicker frequency sample data carrying a training mark, it is possible to train a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second... Furthermore, it is also possible to set the first mark to 1 to indicate that the target parameter satisfies the preset condition, and to set the first mark to 0 to indicate that the target parameter does not satisfy the preset condition.” Li [0041]-[0046]); displaying a video without a photosensitive tag to a browsing user if the browsing user is in a state of photosensitive video filtering (“for the multimedia data whose target parameters do not satisfy the preset conditions, turning on or turning off the shield instruction does not influence the multimedia data to be played whose target parameters do not satisfy the preset conditions, and the terminal device can directly play the multimedia data whose target parameters do not satisfy the preset conditions.” Li, [0083]; i.e., the videos that do not meet the target parameters, such as the shield instruction or mark, will be played, or pushed to the user, for display); sending, according to communication information provided by the creator, a notification to the creator in response to determining the uploaded video being a photosensitive video, wherein the notification comprises a recognition result of the uploaded video and knowledge 
about the photosensitive videos (a data file received via an upload or download and includes the analysis of the file (e.g., the file may be one or more video, graphical, audio etc.) containing trigger content, see Deora at [0022]; That is, the analysis of the file (e.g., video) benefits both users and creators as described at [0021] “few existing content creators actively ensure that their video, graphical etc. content would not cause [photosensitive epilepsy] to occur, or provide users with control over the content being served to them”. The creator and/or user are thereby assured that the file (e.g., video) is analyzed and are provided with options to skip the trigger content, choose to view the trigger content, or may also be presented with any other actions such as deletion of the file, modification of the file, removal of the triggering content, or addition of further warnings, see Deora at [0022]).
Therefore, combining Li and Deora would meet the claim limitations for the same reasons as previously discussed in claim 1.
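Purely as an illustrative aid, and not as part of the record or the cited references: the claim-8 flow mapped above (upload, recognition, tagging, and creator notification via the creator's provided contact information) can be sketched as follows. Every name, the contact address, and the notification text are hypothetical.

```python
# Hypothetical sketch of the claim-8 flow: an uploaded video is run through a
# recognition step; if found photosensitive it is tagged, and a notification
# carrying the recognition result plus background knowledge about photosensitive
# videos is sent to the creator's provided contact information.

def process_upload(video, recognize, send):
    """Tag the uploaded video and notify its creator when the recognition
    step determines the video is photosensitive."""
    if recognize(video["flicker_hz"]):
        video["photosensitive_tag"] = True
        send(video["creator_contact"], {
            "recognition_result": "photosensitive",
            "knowledge": "Flicker in the 5-30 Hz range may trigger "
                         "photosensitive epilepsy.",
        })
    return video

sent = []  # stand-in for a notification channel (e.g., email)
video = {"flicker_hz": 10.0, "creator_contact": "creator@example.com"}
process_upload(video,
               recognize=lambda hz: 5 <= hz <= 30,  # toy recognition model
               send=lambda contact, note: sent.append((contact, note)))
print(video.get("photosensitive_tag"), len(sent))  # True 1
```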
Regarding claim 12, Li and Deora disclose a push apparatus of video, comprising: a memory; and a processor coupled to the memory, the processor configured to, based on instructions stored in the memory, carry out the push method of video according to claim 1 (memory and processor, Li, [0118], [0121]).
Regarding claim 13, Li and Deora disclose a push apparatus of video, comprising: a memory; and a processor coupled to the memory, the processor configured to, based on instructions stored in the memory, carry out the push method of video according to claim 8 (memory and processor, Li, [0118], [0121]).
Regarding claim 14, Li and Deora disclose a non-transitory computer-readable storage medium (Li, [0118], [0121]) having stored thereon computer instructions which, when executed by a processor, cause the processor to (processor, Li, [0118]): acquire videos comprising video(s) with a photosensitive tag and video(s) without a photosensitive tag (“firstly detecting whether a shield instruction is received, and when the shield instruction is detected, then automatically performing a shield processing on a target parameter in the multimedia data satisfying a preset condition, the present disclosure can reach a function capable of performing unified shielding processing on all multimedia data that may cause user's physical discomfort” Li, [0032]; i.e., the shield instruction is equivalent to that of a photosensitive tag which indicates multimedia, or video(s), that contain photosensitive content such as a flicker frequency and videos are further disclosed using a mark, specifically, “the terminal device can perform the detection by means of a mark carried by the multimedia data itself, to detect whether the target parameter in the multimedia data to be played satisfies the preset condition.” Li, [0041]), wherein a video has a photosensitive tag if the video is determined as a photosensitive video by a photosensitive video recognition model (“detect whether a shield instruction for multimedia data is received, wherein the shield instruction can indicate that a shield processing is performed on multimedia data whose target parameter satisfies a preset condition. Among them, the target parameter can have a variety of types, and different types of target parameters can correspond to different preset conditions.
Specifically, the target parameter may be a light flicker frequency.” Li, [0036]-[0037]; “the terminal device can perform the detection by means of a mark carried by the multimedia data itself, to detect whether the target parameter in the multimedia data to be played satisfies the preset condition.” Li, [0041]; wherein the photosensitive video recognition model is that of a pre-trained prediction model, or parameter detection model, see Li [0041]-[0046] “When the detection is performed by a pre-trained detection model, the specific implementation manner of detecting whether the target parameter in the multimedia data to be played satisfies the preset condition can include detecting whether the target parameter in the multimedia data to be played satisfies the preset condition based on a parameter detection model... By training a parameter detection model by utilizing light flicker frequency sample data carrying a training mark, it is possible to train a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second”), the photosensitive video recognition model is trained by a training set, (“the parameter detection model can be trained from target parameter samples. Different target parameters correspond to different target parameter samples, and different target parameters samples can train the parameter detection models for detecting different target parameters. 
As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second is trained” Li, [0043]), a notification is sent to a creator according to communication information provided by the creator in response to determining a video uploaded by the creator is a photosensitive video, the notification comprises a recognition result of the uploaded video and knowledge about photosensitive videos (a data file received via an upload or download and includes the analysis of the file (e.g., the file may be one or more video, graphical, audio etc.) containing trigger content, see Deora at [0022]; That is, the analysis of the file (e.g., video) benefits both users and creators as described at [0021] “few existing content creators actively ensure that their video, graphical etc. content would not cause [photosensitive epilepsy] to occur, or provide users with control over the content being served to them”. 
The creator and/or user are thereby assured that the file (e.g., video) is analyzed and are provided with options to skip the trigger content, choose to view the trigger content, or may also be presented with any other actions such as deletion of the file, modification of the file, removal of the triggering content, or addition of further warnings, see Deora at [0022]); and display video(s) that do not have a photosensitive tag among the videos to a browsing user in case that the browsing user is in a state of photosensitive video filtering (“for the multimedia data whose target parameters do not satisfy the preset conditions, turning on or turning off the shield instruction does not influence the multimedia data to be played whose target parameters do not satisfy the preset conditions, and the terminal device can directly play the multimedia data whose target parameters do not satisfy the preset conditions.” Li, [0083]; i.e., the videos that do not meet the target parameters, such as the shield instruction or mark, will be played, or pushed to the user, for display).
Therefore, combining Li and Deora would meet the claim limitations for the same reasons as previously discussed in claim 1.
Regarding claim 16, Li and Deora disclose the non-transitory computer-readable storage medium according to claim 14, wherein for each video, the video has a photosensitive tag if a photosensitive effect has been added to the video (“As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second” Li, [0043]; “Furthermore, it is also possible to set the first mark to 1 to indicate that the target parameter satisfies the preset condition, and to set the first mark to 0 to indicate that the target parameter does not satisfy the preset condition.” Li, [0046]).
Regarding claim 17, Li and Deora disclose the non-transitory computer-readable storage medium according to claim 14, wherein the instructions further cause the processor to: push the videos to the browsing user if the browsing user is not in the state of photosensitive video filtering (“in the case of covering the playing interface of the multimedia data to be played by the first shield mask layer and playing the multimedia data after shielding, the user can turn off the shield function” Li, [0077]; [0055]).
Regarding claim 18, Li and Deora disclose a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the push method of video according to claim 8 (computer program product, memory and processor, Li, [0118], [0121]).
Regarding claim 19, Li and Deora disclose a non-transitory computer program product that when running on a computer causes the computer to implement the push method of video according to claim 1 (computer program product, memory and processor, Li, [0118], [0121]).
Regarding claim 20, Li and Deora disclose a non-transitory computer program product that when running on a computer causes the computer to implement the push method of video according to claim 8 (computer program product, memory and processor, Li, [0118], [0121]).
Claim(s) 2, 10-11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. in view of Deora et al. and further in view of Pau et al. (US 20200366959 A1).
Regarding claim 2, Li and Deora disclose the push method according to claim 1, further comprising: acquiring feedback of the browsing user on a browsed video (“The user may be presented with an option to skip the triggering content, in which case, the playback of the data file may resume after the second timestamp (corresponding to the end of the triggering content). Alternatively, or in addition to, the user may choose to view the triggering content.” Deora, [0022]; i.e., the browsed video is in the form of a “data file may be received (e.g., uploaded, downloaded, etc.)” Deora, [0022]); and marking the browsed video as a photosensitive video and... indicates that the browsed video is a photosensitive video (“By training a parameter detection model by utilizing light flicker frequency sample data carrying a training mark, it is possible to train a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second... Furthermore, it is also possible to set the first mark to 1 to indicate that the target parameter satisfies the preset condition, and to set the first mark to 0 to indicate that the target parameter does not satisfy the preset condition.” Li [0041]-[0046])
The combination of Li and Deora as a whole does not expressly disclose adding the browsed video to a training set based on the feedback.
However, Pau in the same field of endeavor teaches adding the browsed video to a training set (“User feedback 585 on whether the score 544 (504) predicted by the algorithm 513 (low/medium/high) is accurate is fed back 590 into the machine learning model 513 which is then used to iteratively improve the model 513 over time.” Pau, [0107]; i.e., machine learning models may use the feedback of the user to further improve the accuracy of a model iteratively over time).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Li, Deora, and Pau before the effective filing date of the claimed invention. The motivation for this combination of references would have been to improve the accuracy of the machine learning model using the user’s feedback (Pau, [0107]). This motivation for the combination of Li, Deora, and Pau is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim 10, Li, Deora and Pau disclose the push method according to claim 8, further comprising: receiving feedback sent by the creator after sending the notification to the creator (“The user may be presented with an option to skip the triggering content, in which case, the playback of the data file may resume after the second timestamp (corresponding to the end of the triggering content). Alternatively, or in addition to, the user may choose to view the triggering content.” Deora, [0022]; i.e., the browsed video is in the form of a “data file may be received (e.g., uploaded, downloaded, etc.)” Deora, [0022]); and marking the uploaded video as a non-photosensitive video and adding it to a training set if the feedback (“User feedback 585 on whether the score 544 (504) predicted by the algorithm 513 (low/medium/high) is accurate is fed back 590 into the machine learning model 513 which is then used to iteratively improve the model 513 over time.” Pau, [0107]; i.e., machine learning models may use the feedback of the user to further improve the accuracy of a model iteratively over time) indicates that the uploaded video is not a photosensitive video, wherein the training set is used to train the photosensitive video recognition model (“the parameter detection model can be trained from target parameter samples. Different target parameters correspond to different target parameter samples, and different target parameters samples can train the parameter detection models for detecting different target parameters. As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second is trained” Li, [0043]).
Therefore, combining Li, Deora and Pau would meet the claim limitations for the same reasons as previously discussed in claim 2.
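For illustration of the claim 10 limitation as mapped above (creator feedback relabeling an uploaded video and adding it to the training set, per Pau [0107]), the feedback loop can be sketched as follows. This is a hypothetical sketch only; none of Li, Deora, or Pau discloses this code, and all function, variable, and label names are assumptions made for illustration.

```python
# Hypothetical sketch of the claim 10 feedback loop: after the creator
# is notified, feedback indicating a false positive relabels the
# uploaded video as non-photosensitive and queues it for retraining.

def process_creator_feedback(video_id, feedback, labels, training_set):
    """Relabel a video from creator feedback and add it to the training set.

    feedback == "not_photosensitive" means the creator indicates the
    uploaded video is not a photosensitive video.
    """
    if feedback == "not_photosensitive":
        labels[video_id] = "non_photosensitive"  # mark as non-photosensitive
        # add the relabeled example to the set used to train the
        # photosensitive video recognition model
        training_set.append((video_id, "non_photosensitive"))
    return labels, training_set

labels, training = {}, []
process_creator_feedback("vid42", "not_photosensitive", labels, training)
```

In this sketch the relabeled example is simply appended; an actual system per Pau [0107] would feed such examples back to iteratively improve the model over time.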
Regarding claim 11, Li, Deora and Pau disclose the push method according to claim 8, wherein the photosensitive video recognition model (pre-trained parameter detection model as disclosed by Li, [0043]) is a neural network model (“a rule-based algorithm, a heuristic machine learning algorithm (e.g., a deep neural network, hereinafter “predictive analytics algorithm”) or both, to create one or more sets of identifiers consistent with the input parameters.” Pau, [0059]).
Therefore, combining Li, Deora and Pau would meet the claim limitations for the same reasons as previously discussed in claim 2.
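For illustration of the claim 11 mapping (the recognition model being a neural network, per Pau [0059]), a single neural unit is sketched below. This is purely illustrative; Pau refers generally to a deep neural network, and the hand-chosen weights and feature names here are assumptions, not anything disclosed by the references.

```python
# Minimal sketch of a neural-network-style scorer: one sigmoid unit
# producing a photosensitivity score in (0, 1). A real model per Pau
# [0059] would be a deep network learned from the training set.
import math

def neuron(features, weights, bias):
    """One sigmoid unit: weighted sum of features passed through a sigmoid."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# e.g. hypothetical features: (normalized flicker rate, brightness delta)
score = neuron((0.8, 0.6), (2.0, 1.5), -1.0)
```

A higher score would indicate the input is more likely a photosensitive video; thresholding such a score is one conventional way a neural recognition model produces a classification.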
Regarding claim 15, Li, Deora and Pau disclose the non-transitory computer-readable storage medium according to claim 14, wherein the instructions further cause the processor to (memory and processor, Li, [0118]): acquire feedback of the browsing user on a browsed video (“The user may be presented with an option to skip the triggering content, in which case, the playback of the data file may resume after the second timestamp (corresponding to the end of the triggering content). Alternatively, or in addition to, the user may choose to view the triggering content.” Deora, [0022]; i.e., the browsed video is in the form of a “data file may be received (e.g., uploaded, downloaded, etc.)” Deora, [0022]); and, mark the browsed video as a photosensitive video and adding the browsed video to a training set in case that the feedback (“User feedback 585 on whether the score 544 (504) predicted by the algorithm 513 (low/medium/high) is accurate is fed back 590 into the machine learning model 513 which is then used to iteratively improve the model 513 over time.” Pau, [0107]; i.e., machine learning models may use the feedback of the user to further improve the accuracy of a model iteratively over time) indicates that the browsed video is a photosensitive video, wherein the training set is used to train the photosensitive video recognition model (“the parameter detection model can be trained from target parameter samples. Different target parameters correspond to different target parameter samples, and different target parameters samples can train the parameter detection models for detecting different target parameters. As an example, when the target parameter is a light flicker frequency, the corresponding target parameter samples are the light flicker frequency data carrying training marks, for example, in an example that a parameter detection model capable of detecting multimedia data whose light flicker frequency is in the range of 5-30 times per second is trained” Li, [0043]).
Therefore, combining Li, Deora and Pau would meet the claim limitations for the same reasons as previously discussed in claim 2.
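For illustration of the Li [0043] training-sample mapping relied on above (flicker-frequency data carrying training marks, with a model detecting flicker in the range of 5-30 times per second), the labeling step can be sketched as follows. This is a hypothetical sketch; the function name, label strings, and band constant are assumptions for illustration, not Li’s disclosed implementation.

```python
# Illustrative sketch of labeling flicker-frequency training samples:
# measurements in the 5-30 Hz band described by Li [0043] receive a
# "trigger" training mark; all others are marked "safe".

TRIGGER_BAND_HZ = (5.0, 30.0)  # flicker range the detection model targets

def label_flicker_sample(flicker_hz):
    """Return a training mark for one flicker-frequency measurement."""
    low, high = TRIGGER_BAND_HZ
    return "trigger" if low <= flicker_hz <= high else "safe"

samples = [2.0, 12.0, 25.0, 60.0]  # hypothetical measured flicker rates (Hz)
marks = [label_flicker_sample(f) for f in samples]
```

Samples marked this way would constitute the “target parameter samples” carrying training marks from which, per Li, the parameter detection model is trained.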
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMANUEL SILVA-AVINA whose telephone number is (571)270-0729. The examiner can normally be reached Monday - Friday 11 AM - 8 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMANUEL SILVA-AVINA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673