DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/06/2024 has been entered.
Claims 1, 3-7, 9-11, 14-17, 19-22 are pending.
Claims 2, 8, 12, 13 and 18 are cancelled.
Claims 1, 11, 19, 21, 22 are amended.
Response to Arguments
Applicant's arguments filed 08/19/2025 have been fully considered but they are not persuasive.
Applicant on pg. 12-13 of the remarks argues:
“The cited references, either alone or in combination, have not been shown in the Office Action to teach "selecting a hollow shape, wherein the hollow shape is procedurally generated … The cited portion of Isaacs, and the remainder of that publication and the other cited publications, are silent with respect to any hollow shape that is procedurally generated. As known in the art, the plain meaning of "procedural generation" refers to the algorithmic creation of content rather than the manual generation of content. Isaacs does not disclose any procedural generation, let alone procedural generation of hollow shapes. More specifically, the Office Action appears to map the shape of claim 1 to the puzzle pieces of Isaacs. Office Action, p. 18. In order for Isaacs to disclose the above limitation, it would have to teach puzzle pieces that are hollow and that are procedurally generated. However, Isaacs does not disclose such features.”
The examiner respectfully disagrees. As acknowledged by applicant, “procedural generation” in this art merely means algorithmic generation rather than manual authoring. Isaacs expressly teaches that puzzles are generated from “individually randomly generated puzzle features, such as puzzle surface image, image color gradient, puzzle component shape, or other puzzle features,” which a person of ordinary skill would understand as computer-implemented, algorithmic generation of puzzle component shapes. Those algorithmically generated shapes are then used to define puzzle pieces and the corresponding missing regions in the base image. Thus, under the broadest reasonable interpretation, Isaacs teaches procedurally (algorithmically) generated puzzle component shapes and the use of those shapes to form missing regions—i.e., hollow blocks—in an image. When combined with Cheng’s stylized CAPTCHA images, a person of ordinary skill would have found it obvious to procedurally generate puzzle component shapes, remove corresponding regions from a neural style transferred image, and present those hollow regions and matching pieces as part of a CAPTCHA challenge. Applicant’s arguments again improperly attempt to narrow the claims by importing unclaimed implementation details from the specification (e.g., a particular hollow-block datastore or carving engine), which is contrary to the controlling claim-construction standard.
As to the contention that Isaacs does not teach puzzle pieces that are hollow, the claim does not require a hollow block to be a three-dimensional cavity or a debossed structure; otherwise the applicant would have specifically recited a 3D shape. Under the broadest reasonable interpretation, a hollow block is simply a location in an image from which content has been removed and which corresponds to a missing piece. Isaacs teaches removal of puzzle pieces to create empty receiving regions that must be filled by the correct piece. These regions are hollow blocks under the ordinary meaning of the term. The combination of Isaacs’ missing regions and Kubendran’s procedural generation yields procedurally generated hollow shapes without requiring Isaacs to expressly disclose the whole limitation.
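For illustration only, the sense in which “procedural generation” means algorithmic rather than manual creation of content can be sketched as a deterministic routine that derives a puzzle-piece outline from a random seed. The following code is a hypothetical sketch by way of example; it does not appear in any cited reference, and the function name and shape representation are assumptions:

```python
import random

def procedural_shape(seed, size=8):
    """Algorithmically generate a puzzle-piece outline: a jittered square
    with a randomly chosen tab (+1) or notch (-1) on each of its four edges.
    No manual authoring is involved; the shape is fully determined by the seed."""
    rng = random.Random(seed)
    # Edge features: +1 = protruding tab, -1 = indented notch
    tabs = [rng.choice([1, -1]) for _ in range(4)]
    # Jitter the corner positions slightly so generated shapes vary
    corners = [(x + rng.uniform(-0.5, 0.5), y + rng.uniform(-0.5, 0.5))
               for x, y in ((0, 0), (size, 0), (size, size), (0, size))]
    return {"corners": corners, "tabs": tabs}

# Deterministic algorithm: the same seed always reproduces the same shape,
# while different seeds yield varied shapes (cf. Isaacs' "randomly generated
# puzzle component shape").
assert procedural_shape(7) == procedural_shape(7)
```

A routine of this kind is computer-implemented, algorithmic generation of a shape in the ordinary sense discussed above, as opposed to a manually drawn shape.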
Applicant argues on page 13:
“Furthermore, the cited references, either alone or in combination, have not been shown in the Office Action to teach "carving a hollow block at each of the multiple locations in the neural style transferred image." More specifically, the Office Action incorrectly asserts that the following paragraph of Isaacs discloses this limitation (Office Action, p. 19): The cited portion of Isaacs, and the remainder of that publication and the other cited publications, are silent with respect to carving any block, let alone a hollow block. Office Action, p. 19. In order for Isaacs to disclose the above limitation, it would have to teach carving hollow puzzle pieces. However, Isaacs does not disclose carving at all, let alone carving hollow puzzle pieces.
The Examiner respectfully disagrees. The rejection does not require that the exact word “carving” appear in the prior art. Under the broadest reasonable interpretation, in the context of image processing and puzzle CAPTCHAs, “carving a hollow block” reasonably encompasses algorithmically removing or masking a portion of an image to create a missing region that can later be filled. Isaacs discloses incomplete jigsaw puzzles that are “randomly generated” and that “have one or more pieces missing,” where the missing regions are formed by removing puzzle pieces from a complete image. A person of ordinary skill in the art would understand that this removal operation creates voids or hollow regions in the underlying image, i.e., carving out blocks from the image under BRI.
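Purely as an illustrative sketch (hypothetical code, not taken from Isaacs or any other cited reference; the function name and data layout are assumptions), “carving a hollow block” in the image-processing sense described above amounts to masking out a region of the image and retaining the removed pixels as the matching missing piece:

```python
def carve_hollow_block(image, top, left, shape_mask, fill=0):
    """Remove ("carve") a region from a 2-D image given as nested lists:
    pixels under the mask are replaced with a fill value, leaving a hollow
    block; the removed pixels are returned as the matching missing block."""
    missing = []
    for r, mask_row in enumerate(shape_mask):
        row = []
        for c, inside in enumerate(mask_row):
            if inside:
                row.append(image[top + r][left + c])  # keep removed pixel
                image[top + r][left + c] = fill       # void left in the image
            else:
                row.append(None)                      # outside the shape
        missing.append(row)
    return missing

# A 4x4 image with an L-shaped mask carved at row 1, column 1:
img = [[9] * 4 for _ in range(4)]
mask = [[1, 1], [1, 0]]
piece = carve_hollow_block(img, 1, 1, mask)
# img now contains a hollow (filled-with-zero) region matching the mask,
# and `piece` holds the pixels removed from that region.
```

The removal operation leaves a void in the underlying image that can later be filled by the correct piece, which is the operation attributed to Isaacs above.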
Cheng teaches applying neural style transfer to images used in CAPTCHAs, including generating stylized background images and icons that constitute the CAPTCHA content. A person of ordinary skill in the art would have found it obvious to apply Isaacs’ removal operation to the stylized CAPTCHA images of Cheng, thereby producing hollow regions in neural style transferred images. The applicant’s arguments rely on importing a narrower, specification-specific meaning of “carving” that requires explicit use of that term or a particular implementation, which is not required under BRI.
Applicant also argues that the combination is essentially silent regarding the creation of hollow regions in stylized images and that the Office Action improperly maps the puzzle pieces of Isaacs to the hollow blocks of the claims. The examiner respectfully disagrees. The applicant’s construction requires importing specification limitations into the claims. The references teach removing image components to create voids, generating variable image shapes procedurally, and applying neural style transfer to CAPTCHA images. These teachings operate toward a shared purpose of generating puzzles that resist automated solving. A person of ordinary skill would understand that combining the stylized images of Cheng with the puzzle-removal mechanism of Isaacs is a predictable and straightforward adaptation. The argument that the Office Action must identify a reference using identical terminology does not align with the standard for obviousness. The applicant’s position again underscores the ambiguity of “procedural generation,” because the specification does not describe what specific type of algorithmic process is intended to qualify.
Applicant concludes that because the cited art does not expressly disclose procedurally generated hollow shapes or express carving language, the rejection should be withdrawn. The examiner respectfully disagrees. The limitations at issue are satisfied under the broadest reasonable interpretation when the references are properly considered together. The combination of Cheng, Isaacs, Kubendran, Conti, and Aleksandrovich teaches neural style transferred images used in CAPTCHAs, removal of puzzle pieces to create hollow regions, procedural generation of image shapes, and cursor-driven placement of puzzle components. The applicant’s arguments do not identify error in the factual findings or in the articulated rationale. The rejection therefore remains appropriate.
Therefore, the rejection under 35 U.S.C. § 103 set forth in the Office action of 03/20/2025 is maintained.
Regarding the claim interpretation of Claim 21, the claim does not recite any structure to perform the claimed functions. The claim recites “A system comprising,” which is not sufficient structure to perform the claimed functions. Appropriate correction is required. Therefore, the examiner maintains that Claim 21 invokes 35 U.S.C. 112(f), and the corresponding interpretation set forth in the Office action of 03/20/2025 is maintained.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim 21 recites “means” plus “function” as below:
“a means for providing a neural transfer image…”, “a means for generating a puzzle…”, “a means for generating a CAPTCH…”, “a means for performing the CAPTCH…”, “a means for…linking the missing block…”, “a means for moving the missing block…”, “a means for sending a CAPTCHA answer…”, “a means for validating the CAPTCH…”, “a means for selecting…”, “a means for identifying multiple locations…”, “a means for carving a hollow block…”, “a means for selecting a missing block…”. Therefore, Claim 21 invokes 35 U.S.C. 112(f).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7, 10, 11, 12, 17 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Aleksandrovich et al. (US 2012/0323700 A1) in view of Kubendran (U.S. PGPub. No. 2021/0142454) (hereinafter “Kubendran”) and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”, and in further view of Conti et al. (U.S. Pat. No. 10,387,645 B2) (hereinafter “Conti”) and Isaacs (U.S. Pat. No. 8,671,058 B1) (hereinafter “Isaacs”).
Regarding Claim 1, Aleksandrovich teaches:
A system comprising (Aleksandrovich: [0031], a system for providing CAPTCHA security to websites comprising):
one or more processors (Aleksandrovich: [0031], 1) a client device having a processor and a storage medium including machine readable instructions that when executed by a client cause the client device to load a webpage, including a CAPTCHA challenge; (2) a CAPTCHA server having a processor and a storage medium including machine readable instructions that when executed are capable of performing the steps);
and memory storing instructions that, when executed by the one or more processors, cause the system to perform (Aleksandrovich: [0031], (3) a secured website server having a processor and a computer readable storage medium including machine readable instructions that when executed perform the steps of):
a CAPTCHA server coupled to the CAPTCHA puzzle generator, wherein, in operation (Aleksandrovich: [0016] The present invention is generally directed to a method for remote verification of human interaction comprising the steps of receiving a request for a CAPTCHA challenge with a CAPTCHA server; generating the CAPTCHA challenge; generating a unique identifier related to the CAPTCHA challenge; and storing a CAPTCHA challenge solution on a CAPTCHA server):
to the CAPTCHA puzzle generator (Aleksandrovich: [0006], provides for One common way to differentiate a human from a computer is by a test known as a "Turing test." When a computer program is able to generate the Turing test (=CAPTCHA generator) and evaluate the results, it is typically known as a CAPTCHA (completely automated public test to tell computer and humans apart) program. In addition to the general desire not have certain portions, functions, areas, content or privileges of a website freely available to automated systems, many websites use CAPTCHA programs to prevent attacks by malicious programs, including those that are designed to disrupt service on a large scale. [0020] The CAPTCHA challenge may include an image or graphical representation, which may instruct the client on how to manipulate the graphical elements and wherein the image is capable of being manipulated to match the graphical representation of the CAPTCHA challenge solution. [0021] The CAPTCHA challenge generally includes a graphical representation and graphical elements and at least one of the graphical representation and graphical elements may be distorted such that the graphical elements created from the graphical representation are no longer identical, and when a client solution is assembled, it includes differences between the assembled graphical elements and the graphical representation. The challenge solution stored on the CAPTCHA server includes the graphical coordinates of the graphical elements, such as the graphical coordinates of the assembled graphical elements when the match the graphical representation or desired solution).
the CAPTCHA puzzle generator generates a puzzle by associating multiple hollow blocks (Aleksandrovich: [0020]: provides for the edges of the graphical elements may intentionally overlap, include spaces or other misalignments. [0031], provides for generating a CAPTCHA challenge having a graphical representation and at least one graphical element that is capable of being rearranged. [0063] The CAPTCHA challenge 24 is typically presented to the user of the website within a specified area on the website page, such as in the exemplary box 10. Although the CAPTCHA challenge 24 is illustrated as being presented in a box 10, it may be easily displayed on the webpage without the box 10 or in a variety of other settings. As used herein the terms box, area and space occupied by the moveable pieces of the challenge may be used interchangeably. The box 10 generally contains a challenge 24, such as a puzzle, having a graphical representation 22 of the desired solution, and at least one graphical element 20 requiring manipulation or assembly, such as the illustrated puzzle pieces in in FIGS. 1-3).
the CAPTCHA server, which includes a CAPTCHA generator that generates a CAPTCHA comprised of (Aleksandrovich: [0016], generating the CAPTCHA challenge. [0019], provides for the CAPTCHA challenge is configured to include a graphical representation and graphical elements which are capable of being rearranged to match the graphical representation. The graphical representation is used to generate graphical elements and wherein at least one of the graphical representations and the graphical elements (=Correct hollow block) are manipulated by at least one process of enlargement, rotation, shifting, or overlaying on different backgrounds. The graphical elements include edges which when arranged to match the graphical representation, may not be aligned. For example, gaps, overlays and other variances may be intentionally added.),
a missing block (Aleksandrovich: [0071] FIGS. 4-5 and 9-12 provide other types of assembly puzzles and are provided as only exemplary style puzzles. In FIGS. 4 and 5, the client is presented with partial images (=partial image contains missing hollow block (=blank space) shown in FIG. 7) on the background and then assembles or manipulates the graphical elements 20, such as the various butterflies, depending upon shape and color).
and an incorrect hollow block (Aleksandrovich: [0064]: To increase the difficulty of the challenge 24 for automated systems, a background 12, such as additional completed butterflies or portions of butterflies occurring in the background 12 but not part of the graphical elements 20, may be included (=incorrect hollow block)).
performs the CAPTCHA in association with a CAPTCHA client device (Aleksandrovich: [0062], provides for to obtain access to the desired website area, functionality or content, the client or user must solve a CAPTCHA challenge. Exemplary CAPTCHA challenges of the present invention are illustrated in FIGS. 1-21).
Aleksandrovich does not explicitly disclose:
However, in an analogous art, Kubendran teaches:
(Kubendran: [0018] The system includes several modules that facilitate the generation of graphic design patterns. In FIG. 2, a block diagram shows a system 200 according to an example embodiment. The system 200 takes two input images, a content image 202 and a style image 204. The content image 202 could be a silhouette and/or a picture from which a silhouette can be extracted, e.g., via image to silhouette convertor 210. The style image 204 could be a pattern, picture, or any other image that has some desirable stylistic properties. Optional image-to-silhouette 210 module and procedural generation 212 module can be selected if the input images are pictures (e.g., photographic images) instead of a silhouette and a pattern. These are input to a neural style transfer processor 2056 that processes the input images 202, 204 to produce output images 206, 208 (=the neural style transferred images)).
(Kubendran: [0037], A neural network 930 (e.g., a deep, convolutional neural network) uses the silhouette image to produce content feature layers and uses the style image to produce pattern feature layers. The content feature layers and pattern feature layers can be stored in the memory 904 and or persistent storage 906 (=content images and style image datastore)).
an image selection engine for selecting a content image from the content image datastore and a style image from the style image datastore (Kubendran: [0034] & [0037]In FIG. 8, a flowchart shows a method according to an example embodiment. The method involves inputting 800 a silhouette image to a deep convolutional neural network to produce content feature layers. A style image is input 801 to the deep convolutional neural network to produce pattern feature layers. The content feature layers and the pattern feature layers from the deep convolutional neural network are combined 802 to obtain an output image. The output image includes an abstraction of the style image within confines of the silhouette image. The output image is utilized 803 in a graphic design product),
and a neural style transfer engine for combining the content image and the style image into the neural style transferred image (Kubendran: [0018] The system includes several modules that facilitate the generation of graphic design patterns. In FIG. 2, a block diagram shows a system 200 according to an example embodiment. The system 200 takes two input images, a content image 202 and a style image 204. The content image 202 could be a silhouette and/or a picture from which a silhouette can be extracted, e.g., via image to silhouette convertor 210. The style image 204 could be a pattern, picture, or any other image that has some desirable stylistic properties. Optional image-to-silhouette 210 module and procedural generation 212 module can be selected if the input images are pictures (e.g., photographic images) instead of a silhouette and a pattern. These are input to a neural style transfer processor 2056 that processes the input images 202, 204 to produce output images 206, 208)
the neural style transferred image (Kubendran: [0018], a neural style transfer processor processes the input images 202, 204 to produce output images 206, 208 (=the neural style transferred images)).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Aleksandrovich’s method of generating, performing and solving a CAPTCHA puzzle by applying Kubendran’s method of combining a content image and a style image in order to produce a neural style transfer image. The motivation is to create an automated graphic design (Kubendran: [0015]).
The combination of Aleksandrovich in view of Kubendran does not explicitly teach:
and applies the puzzle
However, in an analogous art, “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” teaches:
and applies the puzzle (Image-Based CAPTCHA’s: [abstract], In this study, the authors apply neural style transfer to enhance the security for CAPTCHA design. [Page no. 521, Col 2, section 4.1, para 3], The final CAPTCHA image is generated (=creating CAPTCHA) via a combination of background image and icons with random locations. Both the background image and the icons are generated using the same style. In this case, the texture of the foreground is very similar to that of background, hence, it is more difficult for attackers to segment the target icons);
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Aleksandrovich in view of Kubendran by applying the well-known technique, as disclosed by “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”, of creating a CAPTCHA image in order to make it difficult for an attacker to segment the target image. The motivation is to make the challenge more difficult and to protect websites from malicious attack (“Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”: [Abstract]).
Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” does not explicitly teach:
when the CAPTCHA is displayed to a user of the CAPTCHA client device, the missing block is linked to a header of a range slider;
when the header is dragged to a correct location, the missing block is moved to an inline location of the correct hollow block on the image, wherein movement of the missing block is coordinated with movement of the header;
wherein the CAPTCHA client device sends a CAPTCHA answer that includes a slider distance position representing distance the header is moved;
and validating the CAPTCHA based on the slider distance position representing the distance the header is moved.
However, in analogous art, Conti teaches:
when the CAPTCHA is displayed to a user of the CAPTCHA client device, the missing block is linked to a header of a range slider (Conti: [Col 5, lines 40-44], (15) With the term “Sensibility” it is meant to refer to a parameter which indicates the intensity of dislocation of each geometric shape with respect to the cursor movement FIG. 1 shows a flow diagram of the method according to a suggested embodiment of the invention);
when the header is dragged to a correct location, the missing block is moved to an inline location of the correct hollow block on the image, wherein movement of the missing block is coordinated with movement of the header (Conti: [Col 6, lines 56-62], (27) Once the method visualized the distorted image on the display of an electronic terminal, and generated the coordinates of the solution position, the method provides to detect (step 105) the movement of a cursor within the test area. The cursor can be of any type, for example a pointer of a mouse, a pointer of an optic pen, the result of the pressure of a finger on a touch screen);
wherein the CAPTCHA client device sends a CAPTCHA answer that includes a slider distance position representing distance the header is moved (Conti: [Col 7, lines 54-60], (34) So the method detects the final position of the cursor (step 108), where the final position is the position of the cursor when the user input the control signal. In particular, every time the user moves the cursor during the test, the method provides to use the coordinates of the cursor (cur_x, cur_y) in the test area and uses them to compute, moment by moment, the position of each image portion using the following formulas: x^i = m_xx^i · cur_x + m_xy^i · cur_y + C_x^i and y^i = m_yy^i · cur_y + m_yx^i · cur_x + C_y^i);
and validating the CAPTCHA based on the slider distance position representing the distance the header is moved (Conti: [Col 7, lines 66-76 to Col 8, lines 1-25], (35) The user stops the movement of the cursor when the user believes that the cursor is in the final position (cur_x^f, cur_y^f) where the user recognizes the distribution of image portions inside the test area to be the original image. So such method provides that when the client terminal detects the final position of the cursor, it transmits the coordinates of the final position of the cursor to the server terminal, which accepts the coordinates of the final position (cur_x^f, cur_y^f). Subsequently, the server terminal compares such coordinates with the coordinates of the solution position (sol_x, sol_y) through a script therein implemented. This comparison occurs by comparing the Euclidean distance between the final position and the solution position, and a predetermined threshold of tolerance. If such difference is less than the tolerance threshold, the method considers that the interaction with the electronic terminal is accomplished by a human, and therefore the user has passed the test).
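As an illustrative sketch only (hypothetical code; the function names and the tolerance value are assumptions, not taken from Conti), the comparison Conti describes, accepting the answer when the reported final position lies within a tolerance threshold of the solution position, can be expressed as:

```python
import math

def validate_slider_answer(slider_distance, solution_distance, tolerance=5.0):
    """Accept the CAPTCHA answer when the reported slider travel places the
    missing block within a tolerance of the solution distance."""
    return abs(slider_distance - solution_distance) <= tolerance

def validate_cursor(final_pos, solution_pos, tolerance=5.0):
    """Two-dimensional variant: compare the Euclidean distance between the
    final cursor position and the solution position against a threshold
    (cf. Conti's comparison of (cur_x^f, cur_y^f) with (sol_x, sol_y))."""
    dx = final_pos[0] - solution_pos[0]
    dy = final_pos[1] - solution_pos[1]
    return math.hypot(dx, dy) <= tolerance

assert validate_slider_answer(102.0, 100.0)   # within tolerance: test passed
assert not validate_cursor((0, 0), (10, 10))  # distance exceeds tolerance
```

If the computed distance is within the threshold, the interaction is treated as accomplished by a human, which is the validation mechanism relied upon in the rejection.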
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” by applying the well-known technique disclosed by Conti of comparing the Euclidean distance between the final position and the solution position against a tolerance threshold to determine whether the user has passed the test. The motivation is to improve the ease of usability, the level of security, and the efficiency of a test for recognizing if the user of an electronic terminal is a human or a robot (Conti: [Col 2, lines 54-64]).
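For illustration only, Conti's per-portion position computation and Euclidean-distance comparison can be sketched as follows (a hypothetical Python sketch; the function and variable names are illustrative and do not appear in the reference):

```python
import math

# Hypothetical sketch of Conti's scheme: each image portion i has an affine
# map from the cursor coordinates (cur_x, cur_y) to its on-screen position,
# and the test passes when the final cursor position is within a
# predetermined tolerance of the solution position.

def portion_position(m, c, cur_x, cur_y):
    """Compute one portion's position: x^i = m_xx*cur_x + m_xy*cur_y + C_x, etc."""
    x = m["xx"] * cur_x + m["xy"] * cur_y + c["x"]
    y = m["yy"] * cur_y + m["yx"] * cur_x + c["y"]
    return x, y

def validate(final_pos, solution_pos, tolerance):
    """Pass the test if the Euclidean distance to the solution is within tolerance."""
    return math.dist(final_pos, solution_pos) <= tolerance
```

A final cursor position near the solution position, e.g. `validate((101, 99), (100, 100), 5)`, would be accepted as human interaction, while a distant one would be rejected.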
Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” does not explicitly teach:
selecting a hollow shape, wherein the hollow shape is procedurally generated;
identifying multiple locations in the neural style transferred image for hollow block placement, wherein at least two of the multiple locations are fixed locations apart from each other;
carving a hollow block at each of the multiple locations in the neural style transferred image;
and selecting a missing block that matches a correct hollow block at one of the fixed locations in the neural style transferred image.
However, in an analogous art, Isaacs teaches:
selecting a hollow shape, wherein the hollow shape is procedurally generated (Isaacs: [Col 13, lines 25-32], (80) The selection process for choosing the correct puzzle piece is based on puzzle shape and the image contained on the piece (S667). The puzzle piece options provided for the user to select from could include identical shapes but only one would have the correct image to complete the puzzle. Similarly, the puzzle pieces provided for selection could all have the correct image but be of different shapes, with only one piece having the correct shape to complete the puzzle);
identifying multiple locations in the neural style transferred image for hollow block placement, wherein at least two of the multiple locations are fixed locations apart from each other (Isaacs: [FIG. 24], 816 and 817 are the two blocks (only one is the correct block) and the white space in 819 is the missing piece of the image. [Col 13, lines 33-45], (81) The user chooses a puzzle piece believed to be the correct missing piece (S669). The choosing action can, for example, be performed by clicking on the piece with the cursor controlled by the mouse or other pointing device of the user interface. If the user is viewing the puzzle captcha puzzle on a user interface with a touch sensitive screen, the choosing action can be performed by the user tapping the correct puzzle piece with their finger or other device such as a stylus. If the user is using a mobile phone or other viewing device as a user interface where a pointing device is not present but a joystick or toggle control is supplied, the choosing action can be performed by the operator navigating to the correct puzzle piece and selecting the piece);
carving a hollow block at each of the multiple locations in the neural style transferred image (Isaacs: [Col 13, lines 22-24], provides for the user studies the provided puzzle and considers the pieces provided to complete the puzzle (S666));
and selecting a missing block that matches a correct hollow block at one of the fixed locations in the neural style transferred image (Isaacs: [Col 11, lines 66-67 to Col 12, lines 1-7], (73) As already mentioned above, in one implementation, the puzzle is a jigsaw puzzle (see for example FIG. 20). The Puzzle Captcha system places substantially equal weight on color, shape and pattern to create a puzzle that is simple to solve for a human but extremely difficult to solve for a computer. As a general overview, an incomplete jigsaw puzzle is randomly generated (see for example FIG. 22). The incomplete jigsaw puzzle has one or more pieces missing. Solution pieces are also provided (see for example FIGS. 23 & 24). These provided solution pieces include the piece missing from the provided incomplete jigsaw puzzle. Solving skills required to solve the puzzle include (i) visually matching provided puzzle pieces (solution pieces) with the missing piece that forms part of those pieces that have been assembled; (ii) choosing the correct piece requires the user to interpret shape and surface patterns. The jigsaw puzzle solution pieces may include identical pieces with the correct solution shape but only one piece will have the correct image section).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” by applying the well-known technique disclosed by Isaacs of completing the CAPTCHA puzzle by choosing the correct piece from among the multiple pieces provided. The motivation is to provide an improved method of generating a Completely Automated Public Test to tell Computers and Humans Apart (CAPTCHA), and more particularly a method that can be used as an entry point to online websites or protected sections, pages or links of websites (Isaacs: [Col 2, lines 20-25]).
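For illustration only, the claimed carving of procedurally generated hollow blocks at fixed locations, with the carved pixels serving as the missing block, can be sketched as follows (a hypothetical Python sketch; nothing here is taken from Isaacs or the other cited references):

```python
import random

# Hypothetical sketch: procedurally generate a hollow shape (algorithmic, not
# hand-drawn), carve it at fixed locations in an image represented as a 2D
# grid, and keep the carved pixels as the "missing block".

def generate_hollow_shape(rng, size=8):
    """Procedurally generate a shape mask: a size x size block with a random corner notch."""
    mask = [[True] * size for _ in range(size)]
    notch = rng.randrange(1, size // 2)          # randomized notch dimension
    for r in range(notch):
        for c in range(notch):
            mask[r][c] = False                   # cut the notch out of the shape
    return mask

def carve(image, shape, locations):
    """Carve the shape at each (top, left) location; return the carved blocks."""
    blocks = []
    for top, left in locations:
        block = []
        for r, row_mask in enumerate(shape):
            # capture the pixels under the shape before hollowing them out
            block.append([image[top + r][left + c] if m else None
                          for c, m in enumerate(row_mask)])
            for c, m in enumerate(row_mask):
                if m:
                    image[top + r][left + c] = None   # hollow out the image
        blocks.append(block)
    return blocks

rng = random.Random(0)
image = [[(r, c) for c in range(32)] for r in range(32)]
shape = generate_hollow_shape(rng)
blocks = carve(image, shape, [(0, 0), (0, 20)])   # two fixed locations apart
missing_block = blocks[0]                          # matches the first hollow
```

The candidate blocks can then be presented alongside the carved image, with only one block matching the correct hollow.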
Regarding Claim 7, Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”; and in further view of Conti and Isaacs teaches:
The system of claim 1 (see rejection of claim 1 above),
(“Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”: [Page 520, Col 2, section 3.1, lines 3-6], provides for the reconstruction process includes two steps: style reconstruction and content reconstruction. For humans, the content of an image is the global structure, whereas the style of an image involves the color and local structures) with feed-forward passes (“Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer”: [page 3, Col 1, section 3.2, lines 5-9], To accelerate the iterative optimisation process, Ulyanov et al. [34] and Johnson et al. [18] combined the benefits of feed-forward image transformation tasks and optimisation-based methods for fast neural style transfer tasks and achieved thousands of times speed-up).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Aleksandrovich in view of Kubendran by applying the image reconstruction process of “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” in order to transform an image. The motivation is to generate a new and unique image.
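For illustration only, the content-reconstruction and style-reconstruction terms described by Cheng et al. can be sketched as a combined loss (a hypothetical, pure-Python sketch; comparing Gram matrices for style is standard in neural style transfer, but the names and weights below are illustrative assumptions, not the reference's code):

```python
# Hypothetical sketch of the two reconstruction terms in neural style
# transfer: a content term comparing feature maps directly (global structure)
# and a style term comparing Gram matrices, which capture correlations
# between feature channels (color and local structure).

def gram(features):
    """Gram matrix of a list of flattened feature channels."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def mse(a, b):
    """Mean squared error between two 2D lists of equal shape."""
    flat_a = [x for row in a for x in row]
    flat_b = [x for row in b for x in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

def transfer_loss(gen, content, style, alpha=1.0, beta=1e3):
    """Weighted sum of content reconstruction and style (Gram) reconstruction."""
    content_loss = mse(gen, content)            # preserve global structure
    style_loss = mse(gram(gen), gram(style))    # match color / local structure
    return alpha * content_loss + beta * style_loss
```

In the feed-forward variants credited to Ulyanov et al. and Johnson et al., a transformation network is trained once to minimize such a loss, so stylizing a new image requires only a single forward pass rather than per-image iterative optimisation.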
Regarding Claim 10, Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” and in further view of Conti and Isaacs teaches:
The system of claim 1 (see rejection of claim 1 above),
wherein the CAPTCHA server includes a token generator, a CAPTCHA queue, an object datastore, a token validator, and a CAPTCHA validator (Aleksandrovich: [0076], provides that, as discussed below, the CAPTCHA server 34 then verifies, authenticates or matches the client solution 28 to a stored CAPTCHA challenge solution 26 before access is granted to the client),
(Conti: [Col 7, lines 66-76 to Col 8, lines 1-25], (35) The user stops the movement of the cursor when the user believes that the cursor is in the final position (cur_x^f, cur_y^f) where the user recognizes the distribution of image portions inside the test area to be the original image. So such method provides that when the client terminal detects the final position of the cursor, it transmits the coordinates of the final position of the cursor to the server terminal, which accepts the coordinates of the final position (cur_x^f, cur_y^f). Subsequently, the server terminal compares such coordinates with the coordinates of the solution position (sol_x, sol_y) through a script therein implemented. This comparison occurs by comparing the Euclidean distance between the final position and the solution position against a predetermined threshold of tolerance. If such difference is less than the tolerance threshold, the method considers that the interaction with the electronic terminal is accomplished by a human, and therefore the user has passed the test).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” by applying the well-known technique disclosed by Conti of comparing the Euclidean distance between the final position and the solution position against a tolerance threshold to determine whether the user has passed the test. The motivation is to improve the ease of usability, the level of security, and the efficiency of a test for recognizing if the user of an electronic terminal is a human or a robot (Conti: [Col 2, lines 54-64]).
Regarding Claim 11, this claim contains limitations identical to those of claim 1 above, albeit directed to a different statutory category (method medium). For this reason, the same grounds of rejection are applied to claim 11.
Regarding Claim 17, this claim contains limitations identical to those of claim 7 above, albeit directed to a different statutory category (method medium). For this reason, the same grounds of rejection are applied to claim 17.
Regarding Claim 20, Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” and in further view of Conti and Isaacs teaches:
The method of claim 11 (see rejection of claim 11 above),
generating a token (Aleksandrovich: [0078]: generates an MD5 signature (=a token) to protect all transferred data between the secured website, the client's web browser, and the CAPTCHA server 34);
queuing a plurality of CAPTCHAs, including the CAPTCHA (Aleksandrovich: [0074], the ability of automated systems to keep up with the ever-increasing number of CAPTCHAs (=queuing CAPTCHAS), given the wide variety of types of images that would be used, is limited. In creating the CAPTCHA type puzzle, a marketer or website owner would submit a copy of an image to the CAPTCHA system wherein the CAPTCHA system would automatically enter and upload the image into the database and then create the desired puzzle);
maintaining an object datastore that includes the CAPTCHA, a CAPTCHA answer, the token, and a CAPTCHA client UID stored in association with one another (Aleksandrovich: [0091] The CAPTCHA server 34 accepts the MD5 signature (=the token) and verifies that such signature was generated by the secured website server 32. [0096] The secured website server 32 then sends a request to the CAPTCHA server 34 and such request includes the unique identifier (=CAPTCHA client UID) 40 generated in step 135 of FIG. 23. In some instances, it may also include the client solution (=a CAPTCHA answer). In step 200 of FIG. 25 the CAPTCHA server 34 accepts the unique identifier 40 generated in step 135 of FIG. 23, and using this unique identifier 40, the CAPTCHA server 34 then locates the CAPTCHA verification result according to its unique identifier 40 in its internal database (the verification result is stored in the database));
validating the CAPTCHA (Aleksandrovich: [0091], provides for the CAPTCHA server 34 accepts the MD5 signature and verifies that such signature was generated by the secured website server 32. FIG.24 illustrates in steps 165 and 170 that the CAPTCHA server 34 may confirm that the client solution constituting a proposed solution to a given CAPTCHA challenge 24 submitted by the client is correct by comparing, verifying, matching or authenticating the client solution 28 submitted by the client against the stored CAPTCHA challenge solution 26. More specifically, the CAPTCHA solution stored on the CAPTCHA server 34 and associated with the unique identifier 40, as described above, are both compared, matched, verified, or authenticated against the client solution (=a CAPTCHA answer) and associated unique identifier 40 sent by the client device);
validating the CAPTCHA client UID by matching it with the CAPTCHA client UID stored in the object datastore (Aleksandrovich: [0091], provides for the CAPTCHA solution stored on the CAPTCHA server 34 and associated with the unique identifier 40, as described above, are both compared, matched, verified, or authenticated against the client solution and associated unique identifier (=UID) sent by the client device. [0096], provides for the secured website server 32 then sends a request to the CAPTCHA server 34 and such request includes the unique identifier 40 generated in step 135 of FIG. 23. In some instances, it may also include the client solution. In step 200 of FIG. 25 the CAPTCHA server 34 accepts the unique identifier (=UID) generated in step 135 of FIG. 23, and using this unique identifier 40, the CAPTCHA server 34 then locates the CAPTCHA verification result according to its unique identifier 40 in its internal database (the verification result is stored in the database)).
validating the token (Aleksandrovich: [0091], provides for the CAPTCHA server 34 accepts the MD5 signature and verifies that such signature was generated by the secured website server 32);
and validating a received CAPTCHA answer by checking if the received CAPTCHA answer is equal to the CAPTCHA answer in the object datastore (Aleksandrovich: [0027], provides for the step of initiating a verification process includes the step of verifying movement of each graphical element of the CAPTCHA challenge from an initial position. [0029], provides for verifying with the client device movement of each graphical element of the CAPTCHA challenge from an initial position (position of graphical element from initial position to the expected position = distance traveled/covered by slider). [0065], provides for FIGS. 3, 10 and 12, the client has moved or manipulated all of the graphical elements 20 or puzzle pieces to the proper position (=equal to the CAPTCHA answer in the datastore/expected position) and perfectly assembled the puzzle such that a verification or submit control 38 may be pressed to check the correctness of the CAPTCHA assembly. [0091], provides for FIG. 24 illustrates in steps 165 and 170 that the CAPTCHA server 34 may confirm that the client solution constituting a proposed solution to a given CAPTCHA challenge 24 submitted by the client is correct by comparing, verifying, matching or authenticating the client solution 28 submitted by the client against the stored CAPTCHA challenge solution 26. [0092], While the CAPTCHA server 34 could store the actual graphical solution, such as an image, on the CAPTCHA server 34, it typically saves the coordinates (of the moveable objects) when the CAPTCHA challenge 24 is being formed in step 130 of FIG. 23);
(Conti: [Col 7, lines 66-76 to Col 8, lines 1-25], (35) The user stops the movement of the cursor when the user believes that the cursor is in the final position (cur_x^f, cur_y^f) where the user recognizes the distribution of image portions inside the test area to be the original image. So such method provides that when the client terminal detects the final position of the cursor, it transmits the coordinates of the final position of the cursor to the server terminal, which accepts the coordinates of the final position (cur_x^f, cur_y^f). Subsequently, the server terminal compares such coordinates with the coordinates of the solution position (sol_x, sol_y) through a script therein implemented. This comparison occurs by comparing the Euclidean distance between the final position and the solution position against a predetermined threshold of tolerance. If such difference is less than the tolerance threshold, the method considers that the interaction with the electronic terminal is accomplished by a human, and therefore the user has passed the test).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Aleksandrovich in view of Kubendran and “Cheng, Z., Gao, H., Liu, Z., Wu, H., Zi, Y. and Pei, G. (2019), Image-Based CAPTCHAs based on neural style transfer” by applying the well-known technique disclosed by Conti of comparing the Euclidean distance between the final position and the solution position against a tolerance threshold. The motivation is to improve the ease of usability, the level of security, and the efficiency of a test for recognizing if the user of an electronic terminal is a human or a robot (Conti: [Col 2, lines 54-64]).
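For illustration only, the token generation, CAPTCHA queue, object datastore, and validation steps recited in claim 20 can be sketched as follows (a hypothetical Python sketch; the signature scheme, record layout, and names are illustrative assumptions, not Aleksandrovich's implementation):

```python
import hashlib
from collections import deque

# Hypothetical sketch of a server-side flow: generate an MD5-style signature
# token, queue CAPTCHAs, store (CAPTCHA, answer, token, client UID) in
# association with one another, and validate a received answer against the
# stored record.

SECRET = b"server-secret"                # illustrative server-side secret

def generate_token(uid, answer):
    """MD5 signature binding the client UID to the expected answer."""
    return hashlib.md5(SECRET + uid.encode() + str(answer).encode()).hexdigest()

captcha_queue = deque()                  # plurality of queued CAPTCHAs
datastore = {}                           # uid -> record, the object datastore

def issue_captcha(uid, captcha, answer):
    token = generate_token(uid, answer)
    captcha_queue.append(captcha)
    datastore[uid] = {"captcha": captcha, "answer": answer, "token": token}
    return token

def validate(uid, received_answer, received_token, tolerance=5):
    """Validate the UID and token, then check the slider-distance answer."""
    record = datastore.get(uid)          # UID must match a stored record
    if record is None or received_token != record["token"]:
        return False                     # unknown UID or forged token
    return abs(received_answer - record["answer"]) <= tolerance
```

For example, after `token = issue_captcha("client-1", "puzzle-img", 120)`, a received answer of 118 with the correct token would validate, while a wrong distance, wrong UID, or wrong token would not.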
Regarding Claim 21, this claim contains limitations identical to those of claim 1 above, albeit directed to a different statutory category (system medium). For this reason, the same grounds of rejection are applied to claim 21.
Regarding Claim 22, Aleksandrovich in view of