Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This office action is in response to the Amendment filed 11/14/2025. In the instant Amendment, claims 1, 6, 9, 11, 12, 15-17 and 19-20 are amended; claims 1, 12 and 20 are independent claims. Claims 1-20 are pending in this application. THIS ACTION IS MADE FINAL.
Response to Arguments
The 35 U.S.C. 101 rejection of claims 1-20 is withdrawn in view of applicant’s amendment filed 11/14/2025.
Applicant’s arguments with respect to claim(s) 1, 12 and 20 with regard to the limitations “prevent the user from accessing the computer resource when either the respective selected label of the first animation is not the correct label for the first animation and the respective selected label of the second animation is not the respective correct label for the second animation; and associate the respective selected label for the third animation with the third animation in the one or more memories as a potential label for the third animation in response to allowing the user to access the computer resource,” have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant argues (on pages 8-10) that the cited prior art fails to explicitly disclose or suggest “transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label; receive from the client device, a respective selected label for each of the at least three animations; allow the user to access the computer resource in response to the respective selected label of the first animation being the respective correct label for the first animation and the respective selected label of the second animation being the respective correct label for the second animation.”
In response, the Examiner respectfully disagrees with the Applicant. Hua discloses sending animations for display on a client device, wherein the animations are associated with an incorrect label (See Hua, [0120], [0020], [0085]; also see [0035], [0074]).
Pham discloses receiving, from the client device, an input of the respective selected label for each of the at least three animations (See Pham, FIG 1C; Col. 6-Col. 7; Col. 1).
Pham discloses allowing the user to access the computer resource in response to the respective selected label of the first animation being the respective correct label for the first animation and the respective selected label of the second animation being the respective correct label for the second animation (See Pham, Col. 6, Col. 10-Col. 11, Col. 15, FIG 1C).
Applicant's arguments (page 10): As to dependent claims 2-11 and 13-19, the Applicant argues that the claims depend directly or indirectly from a respective one of independent claims 1, 12 and 20 and are therefore distinguished from the cited art, or allowable, at least based on their additionally recited patentable subject matter. The Examiner disagrees with the Applicant. The Examiner respectfully submits that dependent claims 2-11 and 13-19 are rejected at least based on the rationale and response presented above for their respective base claims, and on the references applied to dependent claims 2-11 and 13-19.
Therefore, in view of the above reasons, the Examiner maintains the rejection with the cited prior art references.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-11 are rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (“Pham,” US 10,838,066) in view of Hua et al. (“Hua,” US 20160217349), Gargi et al. (“Gargi,” US 8,510,795), and further in view of Kluever et al. (“Kluever,” “Balancing Usability and Security in a Video CAPTCHA,” 2009, Pages 1-10).
Regarding claim 1, Pham discloses an apparatus for verifying whether to allow a user access to a resource, comprising:
one or more memories storing computer-executable instructions; and (Pham, Col. 18, Line 1 describes one or more memories storing computer-executable instructions)
one or more processors configured to execute the instructions, wherein execution of the instructions causes the apparatus to: (Pham, Col. 18, Line 1 describes one or more processors configured to execute the instructions, wherein execution of the instructions causes the apparatus to)
receive, from the client device, a respective selected label for each of the at least three animations; (Pham describes in FIG 1C receive, from the client device, (Col. 7, Lines 40-44) an input of a respective selected label (Col. 1, Lines 19-38, FIG 1C) for each of the at least three animations (Col. 6, Lines 1-13))
allow the user to access the computer resource in response to the respective selected label of the first animation being the respective correct label for the first animation and the respective selected label of the second animation being the respective correct label for the second animation; and (Pham describes allow access (Col. 15, Lines 2-5) to the user based at least in part on the respective selected label of the first animation (Col. 6, Lines 1-13, FIG 1C) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the first animation (Col. 6, Lines 1-13) and the respective selected label of the second animation (Col. 6, Lines 1-13) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the second animation (Col. 6, Lines 1-13))
Pham fails to explicitly disclose transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label;
However, in an analogous art, Hua discloses transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label; (Hua, [0120], [0020], [0085] sending animations to display on a client device wherein the animations are associated with an incorrect label as described in [0035], [0074])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hua with the method/system of Pham to include transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label. One would have been motivated to optimize multi-class multimedia data classification by leveraging negative multimedia data items to train classifiers (Hua, [0004]).
Pham and Hua fail to explicitly disclose prevent the user from accessing the computer resource when either the respective selected label of the first animation is not the correct label for the first animation and the respective selected label of the second animation is not the respective correct label for the second animation.
However, in an analogous art, Gargi discloses prevent the user from accessing the computer resource when either the respective selected label of the first animation is not the correct label for the first animation and the respective selected label of the second animation is not the respective correct label for the second animation, (Gargi, Col. 8, Lines 37-67; Col. 9, Lines 1-6 discloses receiving user input/a response to a video CAPTCHA query, and the CAPTCHA system determines whether the user is human based on a correct response. The system issues more tests when the answers are incorrect over multiple attempts controlled by a counter [preventing the user from accessing the computer resource])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gargi with the method/system of Pham and Hua to include prevent the user from accessing the computer resource when either the respective selected label of the first animation is not the correct label for the first animation and the respective selected label of the second animation is not the respective correct label for the second animation. One would have been motivated to automatically generate video-based tests to distinguish human users from computer software agents in a communications network (Gargi, Col. 1, Lines 6-10).
Pham, Hua and Gargi fail to explicitly disclose and associate the respective selected label for the third animation with the third animation in the one or more memories as a potential label for the third animation in response to allowing the user to access the computer resource.
However, in an analogous art, Kluever discloses and associate the respective selected label for the third animation with the third animation in the one or more memories as a potential label for the third animation in response to allowing the user to access the computer resource, (Kluever, Abstract and Page 2, Left Column disclose that users provide labels (tags) for each video CAPTCHA challenge. Page 3, Section 3 (Challenge Generation) and Page 5, Section 4.2 describe that users must provide enough tag matches to the ground truth to pass the challenge. Page 4, Section 4 (Grading Function) describes that failure to match enough tags results in challenge failure. Page 2, Left Column, Page 3 (Challenge Generation), and Page 4 (Rejecting Frequent Tags) describe that the ground truth is approximated from user tags by frequency and related to the videos; also see Pages 1-10).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kluever with the method/system of Pham, Hua and Gargi to include associate the respective selected label for the third animation with the third animation in the one or more memories as a potential label for the third animation in response to allowing the user to access the computer resource. One would have been motivated to provide a technique for using a content-based video labeling task as a CAPTCHA (Kluever, Abstract, Pages 1-10).
Regarding claim 4, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Pham further discloses wherein the first animation and the second animation are selected from a pool of labeled animations and the third animation is selected from a pool of unlabeled animations, (Pham describes in Figures 1C-1F wherein the first animation and the second animation are selected from a pool of labeled animations, labeled as “car driving,” and the third animation is selected from a pool of unlabeled animations as described in Col. 9, Lines 63-67; Col. 10, Lines 1-22)
Regarding claim 5, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 4.
Pham further discloses wherein the one or more processors, individually or in combination, are configured to add the third animation to the pool of labeled animations with the potential label as a correct label for the third animation in response to a number of potential labels for the third animation from a plurality of allowed users being the same and satisfying a threshold number, (Pham, Col. 18, Line 1 & 130, FIG 1C; Col. 9, Lines 25-37; Col. 15, Lines 1-15 describes wherein the one or more processors, individually or in combination, are configured to add the third animation to the pool of labeled animations with the potential label as a correct label for the third animation in response to a number of potential labels for the third animation from a plurality of allowed users being the same and satisfying a threshold number as described in Col. 9, Lines 25-37)
Regarding claim 6, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Pham further discloses wherein to transmit the at least three animations for display to the user on a client device, the one or more processors, individually or in combination, are configured to transmit selectable labels for the first animation and the second animation, wherein the respective label for each of the first animation and the second animation are selectable by the user from the selectable labels, (Pham, Figures 1C-1F; Col. 14, Lines 62-67; Col. 15, Lines 1-15, FIG 4, describes wherein to transmit the at least three animations for display to the user on a client device, the one or more processors, individually or in combination, are configured to transmit selectable labels for the first animation and the second animation, wherein the respective label for each of the first animation and the second animation are selectable by the user from the selectable labels)
Hua further discloses wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation, (Hua, [0120], [0020], [0085] describes wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation as described in [0035], [0074])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hua with the method/system of Pham to include wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation. One would have been motivated to optimizing multi-class multimedia data classification by leveraging negative multimedia data items to train classifiers (Hua, [0004]).
Regarding claim 7, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Pham further discloses wherein the respective selected label for the third animation is a text entry generated by the user, (Pham describes in Col. 11, Lines 3-8 & 32-52 wherein the input of the respective selected label for the third animation is a text entry generated by the user)
Regarding claim 8, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 7.
Pham further discloses wherein the one or more processors, individually or in combination, are configured to perform statistical clustering of text entries of potential labels for the third animation from a plurality of users to select a correct label for the third animation, (Pham describes in Col. 18, Line 1; Col. 9, Lines 4-24; Col. 15, Lines 1-15 wherein the one or more processors, individually or in combination, are configured to perform statistical clustering of text entries of potential labels for the third animation from a plurality of users to select a correct label for the third animation as described in Col. 10, Lines 51-67; Col. 12, Lines 1-8)
Regarding claim 9, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Pham further discloses wherein the one or more processors, individually or in combination, are configured to transmit selectable labels for the third animation, wherein the selectable labels include randomly generated combinations of actors and actions or potential labels for the third animation that were previously received from other users, (Pham describes in Col. 18, Line 1; Col. 9, Lines 49-62; Col. 10, Lines 7-35 wherein the one or more processors, individually or in combination, are configured to transmit selectable labels for the third animation, wherein the selectable labels include randomly generated combinations of actors and actions or potential labels for the third animation that were previously received from other users as described in Figures 1B-1F)
Regarding claim 10, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 8.
Pham further discloses wherein the one or more processors, individually or in combination, are configured to associate a potential label for the third animation as a correct label for the third animation in response to the potential label being selected by a threshold number of users, (Pham, Col. 18, Line 1 & 130, FIG 1C; Col. 9, Lines 25-37 describes wherein the one or more processors, individually or in combination, are configured to associate a potential label for the third animation as a correct label for the third animation in response to the potential label being selected by a threshold number of users)
Regarding claim 11, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Gargi further discloses wherein the one or more processors, individually or in combination, are configured to prevent the user from accessing the computer resource in response to either the respective selected label of the first animation not being the correct label for the first animation or the respective selected label of the second animation not being the respective correct label for the second animation, (Gargi, Col. 8, Lines 37-67; Col. 9, Lines 1-6 discloses receiving user input/a response to a video CAPTCHA query, and the CAPTCHA system determines whether the user is human based on a correct response. The system issues more tests when the answers are incorrect over multiple attempts controlled by a counter [preventing the user from accessing the computer resource])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gargi with the method/system of Pham and Hua to include prevent the user from accessing the computer resource in response to either the respective selected label of the first animation not being the correct label for the first animation or the respective selected label of the second animation not being the respective correct label for the second animation. One would have been motivated to automatically generate video-based tests to distinguish human users from computer software agents in a communications network (Gargi, Col. 1, Lines 6-10).
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (“Pham,” US 10,838,066), Hua et al. (“Hua,” US 20160217349), and Gargi et al. (“Gargi,” US 8,510,795) in view of Kluever et al. (“Kluever,” “Balancing Usability and Security in a Video CAPTCHA,” 2009, Pages 1-10) and further in view of Ford et al. (“Ford,” US 20220303272).
Regarding claim 2, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 1.
Pham, Hua, Gargi and Kluever fail to explicitly disclose wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation.
However, in an analogous art, Ford discloses wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation, (Ford describes in [0167] wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation; also see [0166], [0171]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ford with the method/system of Pham, Hua, Gargi and Kluever to include wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation. One would have been motivated to present challenges to users that require inference to real-world properties (Ford, [0001]).
Regarding claim 3, Pham, Hua, Gargi and Kluever disclose the apparatus of claim 2.
Ford further discloses wherein the actor in the first animation is different than the actor in the second animation, (Ford, describes in [0167] wherein the actor in the first animation is different than the actor in the second animation [0166], [0171])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ford with the method/system of Pham, Hua, Gargi and Kluever to include wherein the actor in the first animation is different than the actor in the second animation. One would have been motivated to present challenges to users that require inference to real-world properties (Ford, [0001]).
Claims 12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (“Pham,” US 10,838,066) in view of Hua et al. (“Hua,” US 20160217349) and further in view of Kluever et al. (“Kluever,” “Balancing Usability and Security in a Video CAPTCHA,” 2009, Pages 1-10).
Regarding claim 12, Pham discloses a method of verifying whether to allow a user access to a computer resource, comprising:
receiving, from the client device, a respective selected label for each of the at least three animations; (Pham describes in FIG 1C receive, from the client device, (Col. 7, Lines 40-44) an input of a respective selected label (Col. 1, Lines 19-38, FIG 1C) for each of the at least three animations (Col. 6, Lines 1-13))
allowing the user to access the computer resource in response to the respective selected label of the first animation being the respective correct label for the first animation and the respective selected label of the second animation being the respective correct label for the second animation; (Pham describes allow access (Col. 15, Lines 2-5) to the user based at least in part on the respective selected label of the first animation (Col. 6, Lines 1-13, FIG 1C) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the first animation (Col. 6, Lines 1-13) and the respective selected label of the second animation (Col. 6, Lines 1-13) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the second animation (Col. 6, Lines 1-13))
Pham fails to explicitly disclose transmitting at least three animations for display to the user on the client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label.
However, in an analogous art, Hua discloses transmitting at least three animations for display to the user on the client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label; (Hua, [0120], [0020], [0085] sending animations to display on a client device wherein the animations are associated with an incorrect label as described in [0035], [0074])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hua with the method/system of Pham to include transmitting at least three animations for display to the user on the client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation of the animations is not associated with a correct label. One would have been motivated to optimize multi-class multimedia data classification by leveraging negative multimedia data items to train classifiers (Hua, [0004]).
Pham and Hua fail to explicitly disclose and associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource.
However, in an analogous art, Kluever discloses and associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource, (Kluever, Abstract and Page 2, Left Column disclose that users provide labels (tags) for each video CAPTCHA challenge. Page 3, Section 3 (Challenge Generation) and Page 5, Section 4.2 describe that users must provide enough tag matches to the ground truth to pass the challenge. Page 4, Section 4 (Grading Function) describes that failure to match enough tags results in challenge failure. Page 2, Left Column, Page 3 (Challenge Generation), and Page 4 (Rejecting Frequent Tags) describe that the ground truth is approximated from user tags by frequency and related to the videos; also see Pages 1-10).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kluever with the method/system of Pham and Hua to include associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource. One would have been motivated to provide a technique for using a content-based video labeling task as a CAPTCHA (Kluever, Abstract, Pages 1-10).
Regarding claim 14, Pham, Hua and Kluever disclose the method of claim 12.
Pham further discloses wherein the first animation and the second animation are selected from a pool of labeled animations and the third animation is selected from a pool of unlabeled animations, (Pham describes wherein the first animation (Col. 6, Lines 1-13) and the second animation (Col. 6, Lines 1-13) are selected from a pool of labeled animations (Col. 9, Lines 63-67; Col. 10, Lines 1-22) and the third animation is selected from a pool of unlabeled animations (Col. 9, Lines 63-67; Col. 10, Lines 1-22)),
the method further comprising adding the third animation to the pool of labeled animations with the potential label as a correct label for the third animation in response to a number of potential labels for the third animation from a plurality of allowed users being the same and satisfying a threshold number (Pham, Col. 18, Line 1 & 130, FIG 1C; Col. 9, Lines 25-37 describes adding the third animation to the pool of labeled animations with the potential label as a correct label for the third animation in response to a number of potential labels for the third animation from a plurality of allowed users being the same and satisfying a threshold number)
Regarding claim 15, Pham, Hua and Kluever disclose the method of claim 12.
Pham further discloses wherein transmitting at least three animations for display to the user on a client device comprises transmitting selectable labels for the first animation and the second animation, wherein the user selects the respective selected label for each of the first animation and the second animation from the selectable labels, (Pham, Figures 1C-1F; Col. 14, Lines 62-67; Col. 15, Lines 1-15; FIG 4 describes wherein transmitting at least three animations for display to the user on a client device comprises transmitting selectable labels for the first animation and the second animation, wherein the user selects the respective selected label for each of the first animation and the second animation from the selectable labels)
Hua further discloses wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation, (Hua, [0120], [0020], [0085] describes wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation as described in [0035], [0074])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hua with the method/system of Pham to include wherein the selectable labels include the respective correct label for the first animation and the respective correct label for the second animation to be displayed with at least two incorrect labels for each of the first animation and the second animation. One would have been motivated to optimizing multi-class multimedia data classification by leveraging negative multimedia data items to train classifiers (Hua, [0004]).
Regarding claim 16, Pham, Hua and Kluever disclose the method of claim 12.
Pham further discloses wherein the respective selected label for the third animation is a text entry generated by the user, the method further comprising performing statistical clustering of text entries of potential labels for the third animation from a plurality of users to select a correct label for the third animation, (Pham, Col. 11, Lines 3-8 & 32-52 describes wherein the respective selected label for the third animation is a text entry generated by the user; Col. 18, Line 1; Col. 9, Lines 4-24 describes the method further comprising performing statistical clustering of text entries of potential labels for the third animation from a plurality of users to select a correct label for the third animation as described in Col. 10, Lines 51-67; Col. 12, Lines 1-8)
Regarding claim 17, Pham, Hua and Kluever disclose the method of claim 12.
Pham further discloses further comprising transmitting selectable labels for the third animation, wherein the selectable labels include randomly generated combinations of actors and actions or potential labels for the third animation that were previously received from other users, (Pham describes in Col. 18, Line 1; Col. 9, Lines 49-62; Col. 10, Lines 7-35 further comprising transmitting selectable labels for the third animation, wherein the selectable labels include randomly generated combinations of actors and actions or potential labels for the third animation that were previously received from other users as described in Figures 1B-1F)
Regarding claim 18, Pham, Hua and Kluever disclose the method of claim 17.
Pham further discloses associating a potential label for the third animation as a correct label for the third animation in response to the potential label being selected by a threshold number of users (Pham, Col. 18, Line 1 & 130; FIG. 1C; Col. 9, Lines 25-37: associating a potential label for the third animation as a correct label for the third animation in response to the potential label being selected by a threshold number of users).
Regarding claim 19, Pham, Hua and Kluever disclose the method of claim 12.
Pham further discloses wherein allowing the user to access the computer resource is also based on whether the respective selected label for each of the at least three animations is received within a time limit (Pham, Col. 18, Line 1: allowing the user to access the computer resource is also based on whether (Col. 15, Lines 1-15) the respective selected label for each of the at least three animations (Col. 10, Lines 51-67; Col. 11, Lines 1-8) is received within a time limit (Col. 12, Lines 19-34)).
Regarding claim 20, Pham discloses a non-transitory computer-readable medium storing computer executable instructions for verifying whether to allow a user access to a computer resource, wherein execution of the instructions by a processor causes the processor to:
receive, from the client device, a respective selected label for each of the at least three animations (Pham, FIG. 1C: receiving, from the client device (Col. 7, Lines 40-44), an input of a respective selected label (Col. 1, Lines 19-38; FIG. 1C) for each of the at least three animations (Col. 6, Lines 1-13));
allow the user to access the computer resource in response to the respective selected label of the first animation being the respective correct label for the first animation and the respective selected label of the second animation being the respective correct label for the second animation (Pham: allow access (Col. 15, Lines 2-5) to the user based at least in part on the respective selected label of the first animation (Col. 6, Lines 1-13; FIG. 1C) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the first animation (Col. 6, Lines 1-13) and the respective selected label of the second animation (Col. 6, Lines 1-13) being the respective correct label (Col. 10, Lines 51-67; Col. 11, Lines 1-9) for the second animation (Col. 6, Lines 1-13)).
Pham fails to explicitly disclose transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation is not associated with a correct label.
However, in an analogous art, Hua discloses transmit at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation is not associated with a correct label; (Hua, [0120], [0020], [0085] sending animations to display on a client device wherein the animations are associated with an incorrect label as described in [0035], [0074])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hua with the method/system of Pham to include transmitting at least three animations for display to the user on a client device, wherein at least a first animation and a second animation of the at least three animations are associated with a respective correct label and at least a third animation is not associated with a correct label. One would have been motivated to optimize multi-class multimedia data classification by leveraging negative multimedia data items to train classifiers (Hua, [0004]).
Pham and Hua fail to explicitly disclose associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource.
However, in an analogous art, Kluever discloses associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource (Kluever, Abstract; Page 2, Left Column: users provide labels (tags) per video CAPTCHA challenge; Page 3, Section 3 (Challenge Generation); Page 5, Section 4.2: users must provide sufficient tag matches to ground truth to pass the challenge; Page 4, Section 4 (Grading Function): failure to match enough tags results in challenge failure; Page 2, Left Column; Page 3 (Challenge Generation); Page 4 (Rejecting Frequent Tags): user tags approximate ground truth by frequency and are related to videos; see also Pages 1-10).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kluever with the method/system of Pham and Hua to include associating the respective selected label for the third animation with the third animation as a potential label for the third animation in response to allowing the user to access the computer resource. One would have been motivated to provide a technique for using a content-based video labeling task as a CAPTCHA (Kluever, Abstract, Pages 1-10).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (“Pham,” US 10,838,066), Hua et al. (“Hua,” US 20160217349), in view of Kluever et al. (“Kluever,” “Balancing Usability and Security in a Video CAPTCHA,” 2009, Pages 1-10), and further in view of Ford et al. (“Ford,” US 20220303272).
Regarding claim 13, Pham, Hua and Kluever disclose the method of claim 12.
Pham, Hua and Kluever fail to explicitly disclose wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation, wherein the actor in the first animation is different than the actor in the second animation.
However, in an analogous art, Ford discloses wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation, wherein the actor in the first animation is different than the actor in the second animation (Ford, [0167]: the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation; [0166], [0171]: the actor in the first animation is different than the actor in the second animation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ford with the method/system of Pham, Hua and Kluever to include wherein the respective correct label includes an indication of an actor in the animation and an indication of an action in the animation, wherein the actor in the first animation is different than the actor in the second animation. One would have been motivated to present challenges to users that require inference to real-world properties (Ford, [0001]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J WILCOX whose telephone number is (571)270-3774. The examiner can normally be reached M-F: 8 A.M. to 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu T. Pham can be reached on (571)270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES J WILCOX/Examiner, Art Unit 2439
/LUU T PHAM/Supervisory Patent Examiner, Art Unit 2439