Prosecution Insights
Last updated: April 19, 2026
Application No. 18/680,876

VISCAD: VISUAL-GUIDED CAMPAIGN AUTO-DISCOVERY

Non-Final OA: §103, §112
Filed: May 31, 2024
Examiner: HOLLISTER, JAMES ROSS
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: Palo Alto Networks Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% — above average (162 granted / 215 resolved; +17.3% vs TC avg)
Interview Lift: +25.6% among resolved cases with interview (strong)
Avg Prosecution: 2y 8m typical timeline; 18 currently pending
Total Applications: 233 across all art units

Statute-Specific Performance

§101: 15.2% (-24.8% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 215 resolved cases
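The headline figures above can be reproduced directly from the raw counts in the panel. A minimal sketch (Python), restating only numbers shown above; note the Tech Center average is implied by the reported delta, not separately published here:

```python
# Reproduce the examiner dashboard figures from the raw counts shown above.
granted = 162           # career grants
resolved = 215          # career resolved cases
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")    # ~75.3%, displayed as 75%

delta_vs_tc = 17.3      # reported lift over the Tech Center average
tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC 2400 average: {tc_avg:.1f}%")  # ~58.0%
```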

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Summary
This action is responsive to the application filed on 5/31/2024. Claims 1-21 are pending and have been examined. Claims 1-21 are rejected.

Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 8/5/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 138, 146, 152, 156, 144, 805, 830, 855. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or an amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections
Claims 1, 11, 12, 14, 16, 18, 20, and 21 are objected to because of the following informalities: these claims all use an abbreviation (URL) in the claim language without disclosing the meaning of the abbreviation within the scope of each claim set. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The term “visual similarities” in claims 1, 20 and 21 is a relative term which renders the claim indefinite. The term “visual similarities” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7, 11, 13-14 and 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 20230188566 A1) and further in view of Liu et al. (US 20220038424 A1).

As to claim 1, Jones et al. teaches A system, comprising: one or more processors (See ¶ [0044], Teaches that memory(s) 115 may store and/or otherwise provide a plurality of modules (which may, e.g., include instructions that may be executed by processor(s) 114 to cause visual comparison and classification platform 150 to perform various functions) and/or databases (which may, e.g., store data used by visual comparison and classification platform 150 in performing various functions).) configured to: group a plurality of images associated with a plurality of samples to obtain a set of image groups, wherein the plurality of images are grouped based at least in part on visual similarities (See ¶¶ [0068]-[0073], Teaches that the visual comparison and classification platform 150 may receive or otherwise access the image data corresponding to the URL from the URL classification platform 110. The visual comparison and classification platform 150 may compute a computer vision vector representation of the image data received at step 212.
In one or more instances, in computing the computer vision vector representation of the image data, the visual comparison and classification platform 150 may pass the image data through one or more layers of a convolutional neural network (e.g., including a representation layer). The visual comparison and classification platform 150 may compare the computer vision vector representation of the image data to one or more stored numeric vectors representing page elements. In some instances, prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.); determine one or more patterns from URLs for samples associated with images comprised in a particular image group (See ¶ [0075], Teaches that the visual comparison and classification platform 150 may input, into a classifier, the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element. 
For example, the visual comparison and classification platform 150 may input this feature into a machine learning classifier, rule based classifier, or the like. The classifier may, for instance, process the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element in combination with one or more other features by applying one or more machine learning models and may output a numerical classification score, as illustrated below. It should be understood that the classifier is not limited to using visual similarity as an input and may utilize other features and/or evidence to compute the numerical classification score.); and a memory coupled to the one or more processors and configured to provide one or more processors with instructions (See ¶ [0044], Teaches that memory(s) 115 may store and/or otherwise provide a plurality of modules (which may, e.g., include instructions that may be executed by processor(s) 114 to cause visual comparison and classification platform 150 to perform various functions) and/or databases (which may, e.g., store data used by visual comparison and classification platform 150 in performing various functions).). However, it does not expressly teach the details of generate a signature for each of the determined one or more patterns from the URLs. Liu et al., from analogous art, teaches generate a signature for each of the determined one or more patterns from the URLs (See ¶ [0036], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious. The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121. 
The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu et al. into Jones et al., as techniques to detect and/or block access to malicious URLs include deployment of a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementation of URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]).

As to claim 2, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. Jones et al.
further teaches wherein each image in the plurality of images is an image of website content (See ¶ [0068], Teaches that the visual comparison and classification platform 150 may receive or otherwise access the image data corresponding to the URL from the URL classification platform 110. In some instances, the visual comparison and classification platform 150 may receive the URL without the image data, and may access the URL (e.g., a page corresponding to the URL) to collect the image data, receive the image data from the cybersecurity server 160, or otherwise access the image data. In some instances, in receiving the image data, the visual comparison and classification platform 150 may receive image data of a graphical rendering of a resource available at the URL.). As to claim 3, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. Jones et al. further teaches wherein the plurality of images is grouped based at least in part on performing a hashing of each image (See ¶ [0070], Teaches that prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). 
In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.). As to claim 4, the combination of Jones et al. and Liu et al. teaches the system according to claim 3 above. Jones et al. further teaches wherein the performing the hashing of each image comprises: obtaining a plurality of hashes based on performing a perceptual image hashing with respect to each image (See ¶ [0070], Teaches that prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.). As to claim 5, the combination of Jones et al. and Liu et al. 
teaches the system according to claim 4 above. Jones et al. further teaches wherein grouping the plurality of images comprises: determining a first grouping of the plurality of images based at least in part on the plurality of hashes (See ¶ [0070], Teaches that prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.). As to claim 6, the combination of Jones et al. and Liu et al. teaches the system according to claim 5 above. Jones et al. 
further teaches wherein the grouping the plurality of images comprises: refining the first grouping of the plurality of images to obtain the set of image groups (See ¶¶ [0070], [0079], Teaches that prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages. The visual comparison and classification platform 150 may compare the image data captured from the URL to the image data captured from the one or more ancestor pages by performing a visual comparison (e.g., a color analysis, a deep learning vector comparison, a logo comparison, optical character comparison, or the like) between the image data captured from the URL and the image data captured from the one or more ancestor pages. Additionally or alternatively, the visual comparison and classification platform 150 may perform a non-visual comparison of the URL and its ancestor page(s), such as a comparison of code, markup, text, or the like captured from the URL and code, markup, text, or the like captured from the one or more ancestor pages.). 
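The matching strategy quoted repeatedly from Jones ¶ [0070] (a cheap hash-table lookup for exact matches first, falling back to computationally intensive nearest-neighbor or radius search only when no exact match exists) can be sketched as follows. This is an illustrative reconstruction under stated assumptions; every identifier here is hypothetical, and neither reference discloses this exact code:

```python
import hashlib

def match_element(image_bytes, hash_index, vectors, embed, radius=0.25):
    """Two-stage match per the strategy Jones ¶ [0070] describes.

    hash_index: digest -> element id (exact-match table)
    vectors:    element id -> stored numeric vector
    embed:      callable producing a vector for the image (stands in for
                the CNN representation layer; hypothetical)
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in hash_index:              # stage 1: O(1) exact match
        return hash_index[digest]
    query = embed(image_bytes)            # stage 2: inexact, costlier
    best, best_dist = None, float("inf")
    for elem_id, vec in vectors.items():  # naive nearest-neighbor scan
        dist = sum((a - b) ** 2 for a, b in zip(query, vec)) ** 0.5
        if dist < best_dist:
            best, best_dist = elem_id, dist
    # Radius cutoff: no match if nothing is close enough.
    return best if best_dist <= radius else None
```

The design point the examiner quotes is the ordering: the exact-match path short-circuits before any vector computation, which is where the claimed resource-consumption advantage comes from.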
As to claim 7, the combination of Jones et al. and Liu et al. teaches the system according to claim 6 above. Jones et al. further teaches wherein the refining the first grouping of the plurality of image comprises: encoding the plurality of images based at least in part on a predetermined deep learning model (See ¶¶ [0070], [0079], Teaches that prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages. The visual comparison and classification platform 150 may compare the image data captured from the URL to the image data captured from the one or more ancestor pages by performing a visual comparison (e.g., a color analysis, a deep learning vector comparison, a logo comparison, optical character comparison, or the like) between the image data captured from the URL and the image data captured from the one or more ancestor pages. 
Additionally or alternatively, the visual comparison and classification platform 150 may perform a non-visual comparison of the URL and its ancestor page(s), such as a comparison of code, markup, text, or the like captured from the URL and code, markup, text, or the like captured from the one or more ancestor pages.).

As to claim 11, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. Jones et al. further teaches wherein the one or more patterns from the URLs for samples are determined based at least in part on one or more heuristics (See ¶¶ [0054]-[0056], Teaches that in some instances, in identifying the one or more human-engineered features of the URL, the URL classification platform 110 may identify one or more instances of brand mimicry. For example, in identifying the one or more instances of brand mimicry, the URL classification platform 110 may perform a string match to identify inclusion of brand names and key words in the URL. Additionally or alternatively, in identifying the one or more instances of brand mimicry, the URL classification platform 110 may identify an edit distance between the URL and other brand strings (and this edit distance may, e.g., correspond to how many modifications such as additions, deletions, replacements, or the like would need to be made to make the URL string match or include the other brand strings). Additionally or alternatively, in identifying the one or more instances of brand mimicry, the URL classification platform 110 may identify phonetic distances between the URL and brand names (and such a phonetic distance may, e.g., represent whether and/or to what extent the URL string contains words or phrases that sound like a brand).
Additionally or alternatively, in identifying the one or more instances of brand mimicry, the URL classification platform 110 may perform visual processing using a computer vision system to identify whether and/or to what extent the URL string includes text that looks like a brand name, even if the characters do not match exactly (and this may, e.g., include looking for instances in which certain characters are swapped to appear visually similar to other characters, such as “cl” for “d,” “1” for “l,” or the like).). As to claim 13, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. Jones et al. further teaches wherein the one or more processors are further configured to: determine one or more patterns from HTMLs for samples associated with the images comprised in a particular image group (See ¶ [0075], Teaches that the visual comparison and classification platform 150 may input, into a classifier, the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element. For example, the visual comparison and classification platform 150 may input this feature into a machine learning classifier, rule based classifier, or the like. The classifier may, for instance, process the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element in combination with one or more other features by applying one or more machine learning models and may output a numerical classification score, as illustrated below. It should be understood that the classifier is not limited to using visual similarity as an input and may utilize other features and/or evidence to compute the numerical classification score.). As to claim 14, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. 
However, it does not expressly teach the details of wherein the one or more processors are further configured to: obtain a new sample; determine a signature for the new sample; and classify a new sample based at least in part on the signature for the new sample and signatures for the one or more patterns from the URLs. Liu et al., from analogous art, teaches wherein the one or more processors are further configured to: obtain a new sample; determine a signature for the new sample; and classify a new sample based at least in part on the signature for the new sample and signatures for the one or more patterns from the URLs (See ¶ [0036], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious. The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121. The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. 
In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu et al. into the combination of Jones et al. and Liu et al., as techniques to detect and/or block access to malicious URLs include deployment of a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementation of URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]).

As to claim 16, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. However, it does not expressly teach the details of wherein the one or more processors are further configured to: classify the signature for a particular pattern from the URLs as benign or malicious based at least in part on historical information. Liu et al., from analogous art, teaches wherein the one or more processors are further configured to: classify the signature for a particular pattern from the URLs as benign or malicious based at least in part on historical information (See ¶¶ [0036], [0045], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious.
The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121. The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL. The pattern generator can be invoked by components executing in a training environment to build a repository of malicious URL patterns and by components executing in a deployment environment for pattern-based detection of malicious URLs. 
For the former, the pattern generator may retrieve a labeled URL from a URL repository (e.g., via an API published by the URL repository). In the deployment environment, the pattern generator can detect a URL indicated in an incoming request for a resource (e.g., in a header of an HTTP request). Pattern generation is subsequently performed for the URL which was either obtained from a URL repository or detected in a request.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu et al. into the combination of Jones et al. and Liu et al., as techniques to detect and/or block access to malicious URLs include deployment of a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementation of URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]).

As to claim 17, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. However, it does not expressly teach the details of wherein classification of the signature for a particular pattern is used to train a machine learning model configured to detect malicious samples. Liu et al., from analogous art, teaches wherein classification of the signature for a particular pattern is used to train a machine learning model configured to detect malicious samples (See ¶¶ [0036], [0045], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious. The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121.
The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL. The pattern generator can be invoked by components executing in a training environment to build a repository of malicious URL patterns and by components executing in a deployment environment for pattern-based detection of malicious URLs. For the former, the pattern generator may retrieve a labeled URL from a URL repository (e.g., via an API published by the URL repository). In the deployment environment, the pattern generator can detect a URL indicated in an incoming request for a resource (e.g., in a header of an HTTP request). 
Pattern generation is subsequently performed for the URL which was either obtained from a URL repository or detected in a request). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cited teaching of Liu et al. into the combination of Jones et al. and Liu et al. in order to detect and/or block access to malicious URLs, e.g., by deploying a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementing URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]). As to claim 18, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. However, it does not expressly teach the details of wherein the signature for a particular pattern is used to cover unclassified URLs to increase detection coverage or reduce false positive maliciousness classifications. Liu et al., from analogous art, teaches wherein the signature for a particular pattern is used to cover unclassified URLs to increase detection coverage or reduce false positive maliciousness classifications (See ¶ [0036], Teaches that the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs.
In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cited teaching of Liu et al. into the combination of Jones et al. and Liu et al. in order to detect and/or block access to malicious URLs, e.g., by deploying a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementing URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]). As to claim 19, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. Jones et al. further teaches wherein the plurality of samples are obtained from a database of log data (See ¶ [0017], Teaches that the computing platform may update the screenshot database by: 1) identifying that a page image corresponding to a URL of the plurality of URLs has changed (e.g., where a previous page image corresponding to the URL of the plurality of URLs is stored in the screenshot database), 2) in response to determining that the page image corresponding to a URL of the plurality of URLs has changed: a) capturing the page image corresponding to the URL of the plurality of URLs, resulting in a captured page image corresponding to the URL of the plurality of URLs, and 3) adding the captured page image corresponding to the URL of the plurality of URLs to the screenshot database.). As to claim 20, Jones et al.
teaches a method, comprising: grouping a plurality of images associated with a plurality of samples to obtain a set of image groups, wherein the plurality of images are grouped based at least in part on visual similarities (See ¶¶ [0068]-[0073], Teaches that the visual comparison and classification platform 150 may receive or otherwise access the image data corresponding to the URL from the URL classification platform 110. The visual comparison and classification platform 150 may compute a computer vision vector representation of the image data received at step 212. In one or more instances, in computing the computer vision vector representation of the image data, the visual comparison and classification platform 150 may pass the image data through one or more layers of a convolutional neural network (e.g., including a representation layer). The visual comparison and classification platform 150 may compare the computer vision vector representation of the image data to one or more stored numeric vectors representing page elements. In some instances, prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements).
In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.); determining one or more patterns from URLs for samples associated with images comprised in a particular image group (See ¶ [0075], Teaches that the visual comparison and classification platform 150 may input, into a classifier, the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element. For example, the visual comparison and classification platform 150 may input this feature into a machine learning classifier, rule based classifier, or the like. The classifier may, for instance, process the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element in combination with one or more other features by applying one or more machine learning models and may output a numerical classification score, as illustrated below. It should be understood that the classifier is not limited to using visual similarity as an input and may utilize other features and/or evidence to compute the numerical classification score.). However, it does not expressly teach the details of generating a signature for each of the determined one or more patterns from the URLs.
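The two-stage matching that the quoted Jones passage describes, a cheap exact hash-table lookup before the more computationally intensive nearest-neighbor comparison, can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical, not the reference's API.

```python
import numpy as np

class TwoStageMatcher:
    """Sketch of two-stage page-element matching: an exact hash lookup
    first, falling back to a costlier nearest-neighbor (radius) search
    over stored numeric vectors. Names are illustrative only."""

    def __init__(self):
        # A production system would key on a stable digest (e.g., SHA-256)
        # rather than Python's per-process salted hash().
        self._exact = {}      # image hash -> element label
        self._vectors = []    # stored numeric vectors for page elements
        self._labels = []

    def add_element(self, image_bytes, vector, label):
        self._exact[hash(image_bytes)] = label
        self._vectors.append(np.asarray(vector, dtype=float))
        self._labels.append(label)

    def match(self, image_bytes, vector, radius=0.5):
        # Stage 1: relatively quick exact-match lookup.
        label = self._exact.get(hash(image_bytes))
        if label is not None:
            return label
        # Stage 2: inexact matching, nearest neighbor within a radius.
        v = np.asarray(vector, dtype=float)
        dists = [np.linalg.norm(v - s) for s in self._vectors]
        if dists and min(dists) <= radius:
            return self._labels[int(np.argmin(dists))]
        return None
```

As in the quoted passage, an exact hit short-circuits the vector comparison entirely, so the expensive search only runs for previously unseen images.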
Liu et al., from analogous art, teaches generating a signature for each of the determined one or more patterns from the URLs (See ¶ [0036], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious. The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121. The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc. The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL.).
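The verdict flow in the passage just quoted, matching a pattern against the malicious pattern repository and then consulting a whitelist and benign pattern repository before reporting, might look like this minimal sketch. The repository and parameter names are assumptions for illustration, not Liu's implementation.

```python
def evaluate_pattern(pattern, malicious_repo, benign_repo, whitelist,
                     source_ip=None):
    """Sketch of the quoted verdict flow: a pattern is reported malicious
    only if it matches a known-malicious pattern AND is not cleared by the
    whitelist or benign pattern repository. All names are illustrative."""
    # Step 1: query the malicious pattern repository for a match.
    if pattern not in malicious_repo:
        return "unknown"
    # Step 2: additional verification against the whitelist (e.g., an IP
    # whitelist) and the benign repository, to prevent false detections.
    if source_ip in whitelist or pattern in benign_repo:
        return "benign"
    return "malicious"
```

The benign-side checks run before any verdict is reported, mirroring the quoted point that the whitelist and benign repository exist to prevent false detection of malicious URLs.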
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu et al. into Jones et al. in order to detect and/or block access to malicious URLs, e.g., by deploying a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementing URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]). As to claim 21, Jones et al. teaches a computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for: grouping a plurality of images associated with a plurality of samples to obtain a set of image groups, wherein the plurality of images are grouped based at least in part on visual similarities (See ¶¶ [0068]-[0073], Teaches that the visual comparison and classification platform 150 may receive or otherwise access the image data corresponding to the URL from the URL classification platform 110. The visual comparison and classification platform 150 may compute a computer vision vector representation of the image data received at step 212. In one or more instances, in computing the computer vision vector representation of the image data, the visual comparison and classification platform 150 may pass the image data through one or more layers of a convolutional neural network (e.g., including a representation layer). The visual comparison and classification platform 150 may compare the computer vision vector representation of the image data to one or more stored numeric vectors representing page elements.
In some instances, prior to comparing the computer vision vector representation of the image data to the one or more stored numeric vectors representing page elements, the visual comparison and classification platform 150 may use a hash table lookup function to determine whether an exact match exists between the image data and a specific page element (e.g., without using the computer vision vector representation of the image data or the one or more stored numeric vectors representing page elements). In doing so, the visual comparison and classification platform 150 may perform this relatively quick matching function prior to performing more computationally intensive and/or inexact matching (e.g., using a nearest neighbor search, radius search, or the like), comparing, or the like (e.g., if an exact match is identified, the visual comparison and classification platform 150 does not need to move to the more computationally intensive matching) and thus optimize computing resource consumption, thereby providing one or more technical advantages.); determining one or more patterns from URLs for samples associated with images comprised in a particular image group (See ¶ [0075], Teaches that the visual comparison and classification platform 150 may input, into a classifier, the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element. For example, the visual comparison and classification platform 150 may input this feature into a machine learning classifier, rule based classifier, or the like. The classifier may, for instance, process the feature indicating whether and/or to what extent the computer vision vector representation of the image data is visually similar to the known page element in combination with one or more other features by applying one or more machine learning models and may output a numerical classification score, as illustrated below. 
It should be understood that the classifier is not limited to using visual similarity as an input and may utilize other features and/or evidence to compute the numerical classification score.). However, it does not expressly teach the details of generating a signature for each of the determined one or more patterns from the URLs. Liu et al., from analogous art, teaches generating a signature for each of the determined one or more patterns from the URLs (See ¶ [0036], Teaches that the pattern evaluator 205 determines if the pattern 212 is malicious. The determination of whether the pattern 212 is malicious is at least partly based on whether the pattern 212 matches a pattern previously generated from a malicious URL(s) that has been stored in the malicious pattern repository 117 and does not match a pattern previously generated from benign URLs that has been stored in the benign pattern repository 121. The pattern evaluator 205 submits a query indicating the pattern 212 to the malicious pattern repository 117 to determine whether the pattern 212 matches a pattern previously generated from a known malicious URL. In this example, the pattern evaluator 205 determines from the results of the submitted query that the pattern 212 matches a pattern 217 which was previously generated from a known malicious URL(s). Before a verdict is made for the pattern 212, the pattern evaluator 205 can also implement an additional verification to determine if the pattern 212 should not be indicated as malicious based on results of querying a whitelist 215 (e.g., an Internet Protocol (IP) whitelist) and the benign pattern repository 121. In this example, the pattern evaluator 205 queries the benign pattern repository 121 and a whitelist 215 to determine whether the pattern 212 should not be indicated as malicious based on association of the request 202 with a benign URL, IP address, etc.
The whitelist 215 and benign pattern repository 121 can be queried before reporting a verdict for a pattern to prevent false detection of malicious URLs. In this example, the pattern evaluator 205 determines that results of the queries submitted to the whitelist 215 and the benign pattern repository 121 do not indicate that the pattern 212 corresponds to a benign URL.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liu et al. into Jones et al. in order to detect and/or block access to malicious URLs, e.g., by deploying a web crawler to identify malicious URLs, analyzing network traffic for malicious content, implementing URL filtering policies (e.g., at a firewall), and establishing and maintaining a blacklist of URLs known to be malicious (See Liu et al. ¶ [0002]). Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 20230188566 A1) and Liu et al. (US 20220038424 A1) and further in view of Danino et al. (US 20250323943 A1). As to claim 8, the combination of Jones et al. and Liu et al. teaches the system according to claim 7 above. However, it does not expressly teach the details of wherein the predetermined deep learning model is ResNet-50. Danino, from analogous art, teaches wherein the predetermined deep learning model is ResNet-50 (See ¶ [0154], Teaches that a deep learning-based approach may be used to generate embeddings or feature vectors from screenshots using pretrained convolutional neural networks (e.g., ResNet or EfficientNet), which are then binarized or clustered to produce perceptual hashes that capture higher-order visual semantics.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Danino into the combination of Jones et al. and Liu et al.
in order to generate a more reliable similarity score which can be helpful to determine the total risk score (See Danino ¶ [0003]). As to claim 9, the combination of Jones et al. and Liu et al. teaches the system according to claim 7 above. However, it does not expressly teach the details of wherein the refining the first grouping of the plurality of images further comprises: determining the set of image groups based at least in part on the encoding of the plurality of images. Danino, from analogous art, teaches wherein the refining the first grouping of the plurality of images further comprises: determining the set of image groups based at least in part on the encoding of the plurality of images (See ¶¶ [0154]-[0155], Teaches that once a screenshot is captured, a perceptual hash can be generated using open-source or proprietary image hashing libraries, such as ImageHash, pHash, dHash, or aHash, each of which employs different techniques to distill visual structure into a compact digital representation. These libraries can be configured to extract key visual features (such as layout geometry, edge gradients, color intensity patterns, or frequency domain components) while discarding irrelevant pixel-level variations. For example, in an implementation using pHash, the screenshot may be converted to grayscale, resized to a standardized dimension (e.g., 32×32 pixels), and transformed via the Discrete Cosine Transform (DCT), after which the most significant frequency coefficients are thresholded to produce a binary hash. In another embodiment, a deep learning-based approach may be used to generate embeddings or feature vectors from screenshots using pretrained convolutional neural networks (e.g., ResNet or EfficientNet), which are then binarized or clustered to produce perceptual hashes that capture higher-order visual semantics.
To ensure consistent comparisons, the system may normalize all screenshots prior to hashing by standardizing resolution, cropping unnecessary browser chrome (e.g., address bars, scrollbars), converting to a common format (e.g., PNG), or aligning the page content to a consistent aspect ratio. In certain implementations, normalization may also include deskewing, removing watermarks, or masking dynamic regions (e.g., timestamps, user-specific data) to reduce false mismatches. These steps enable the system to focus on persistent structural and layout features of the webpage and allow perceptual hashes to remain robust across platform-specific rendering differences, screen sizes, and device types. Step 3: Similarity comparison. The system can compare the hash of the suspected phishing site against hashes from the legitimate repository. Additionally, the system can use a similarity metric (e.g., perceptual distance) to measure how closely two hashes match, wherein lower values can indicate greater similarity. Alternatively, the system can compare the similarity score.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Danino into the combination of Jones et al. and Liu et al. in order to generate a more reliable similarity score which can be helpful to determine the total risk score (See Danino ¶ [0003]). As to claim 10, the combination of Jones et al. and Liu et al. and Danino teaches the system according to claim 9 above. However, it does not expressly teach the details of wherein the determining the set of image groups based at least in part on the encoding of the plurality of images comprises: merging a plurality of groups from the first grouping based at least in part on a similarity among the plurality of groups.
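The pHash procedure described in the Danino passage quoted above (grayscale, resize to 32×32, DCT, threshold the most significant frequency coefficients) and its Step 3 similarity comparison can be sketched roughly as follows. This is an approximation of the cited description, not the pHash library's exact algorithm; the 8×8 low-frequency block and median threshold are common conventions assumed here.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def phash(gray32):
    """pHash-style perceptual hash: 2-D DCT of a 32x32 grayscale image,
    keep the low-frequency 8x8 block, threshold against the median to
    produce a 64-bit binary hash. A sketch of the quoted description."""
    d = dct_matrix(32)
    coeffs = d @ gray32 @ d.T        # 2-D DCT-II
    low = coeffs[:8, :8].flatten()   # most significant frequencies
    return low > np.median(low)      # 64-bit boolean hash

def hamming(h1, h2):
    # Step 3, similarity comparison: lower distance means more similar.
    return int(np.count_nonzero(h1 != h2))
```

Because only coarse frequency structure survives the thresholding, uniform brightness shifts and small pixel-level variations leave the hash unchanged, which is what makes the distance useful for comparing a suspected phishing page against a legitimate-page repository.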
Danino, from analogous art, teaches wherein the determining the set of image groups based at least in part on the encoding of the plurality of images comprises: merging a plurality of groups from the first grouping based at least in part on a similarity among the plurality of groups (See ¶¶ [0154]-[0155], Teaches that once a screenshot is captured, a perceptual hash can be generated using open-source or proprietary image hashing libraries, such as ImageHash, pHash, dHash, or aHash, each of which employs different techniques to distill visual structure into a compact digital representation. These libraries can be configured to extract key visual features (such as layout geometry, edge gradients, color intensity patterns, or frequency domain components) while discarding irrelevant pixel-level variations. For example, in an implementation using pHash, the screenshot may be converted to grayscale, resized to a standardized dimension (e.g., 32×32 pixels), and transformed via the Discrete Cosine Transform (DCT), after which the most significant frequency coefficients are thresholded to produce a binary hash. In another embodiment, a deep learning-based approach may be used to generate embeddings or feature vectors from screenshots using pretrained convolutional neural networks (e.g., ResNet or EfficientNet), which are then binarized or clustered to produce perceptual hashes that capture higher-order visual semantics. To ensure consistent comparisons, the system may normalize all screenshots prior to hashing by standardizing resolution, cropping unnecessary browser chrome (e.g., address bars, scrollbars), converting to a common format (e.g., PNG), or aligning the page content to a consistent aspect ratio. In certain implementations, normalization may also include deskewing, removing watermarks, or masking dynamic regions (e.g., timestamps, user-specific data) to reduce false mismatches. 
These steps enable the system to focus on persistent structural and layout features of the webpage and allow perceptual hashes to remain robust across platform-specific rendering differences, screen sizes, and device types. Step 3: Similarity comparison. The system can compare the hash of the suspected phishing site against hashes from the legitimate repository. Additionally, the system can use a similarity metric (e.g., perceptual distance) to measure how closely two hashes match, wherein lower values can indicate greater similarity. Alternatively, the system can compare the similarity score.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Danino into the combination of Jones et al. and Liu et al. in order to generate a more reliable similarity score which can be helpful to determine the total risk score (See Danino ¶ [0003]). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 20230188566 A1) and Liu et al. (US 20220038424 A1) and further in view of Kutt et al. (US 20220046057 A1). As to claim 12, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. However, it does not expressly teach the details of wherein the one or more patterns from the URLs for samples are determined based at least in part on performing a deep learning clustering. Kutt et al., from analogous art, teaches wherein the one or more patterns from the URLs for samples are determined based at least in part on performing a deep learning clustering (See ¶ [0193], Teaches that Since all benign data is assigned the off-target label, we first grouped all malicious samples together. For multiclass models, we grouped malware families together and clustered each one individually. For binary models, we clustered all malware samples at the same time.
We further isolated only the token sequence representations, x_t, of each malicious sample and discarded the char sequence representations, x_c. We vectorized each sequence of token indices by computing the Term Frequency Inverse Document Frequency (TF-IDF) (see, e.g., C. Sammut and G. I. Webb, editors, TF-IDF, pages 986-987, Springer US, Boston, Mass., 2010, ISBN 978-0-387-30164-8, doi:10.1007/978-0-387-30164-8_832, available at https://doi.org/10.1007/978-0-387-30164-8_832) vectors over the token vocabulary, v_t. We performed K-means++ on these TF-IDF vector representations of each malicious sample with K=ρ for binary models and K=1 for multiclass models. We used Euclidean distance for clustering. We then computed the Euclidean distance of the resulting cluster center(s) to all malware samples in the group. The training malware sample which was closest to a cluster center was chosen as a prototype initialization. After initialization of all the embedded vectors, each prototype, (P_c, P_t), was initialized as the corresponding (X^c; X^t) from the chosen malware samples. Each (P_c, P_t) was perturbed slightly with Gaussian noise to avoid potential overfitting.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kutt et al. into the combination of Jones et al. and Liu et al. in order to help detect and mitigate malware (See Kutt et al. ¶ [0003]). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 20230188566 A1) and Liu et al. (US 20220038424 A1) and further in view of Korsunsky et al. (US 20110231564 A1). As to claim 15, the combination of Jones et al. and Liu et al. teaches the system according to claim 1 above. However, it does not expressly teach the details of wherein the one or more patterns comprises one or more regexes.
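The clustering step quoted above from Kutt et al. (TF-IDF vectors over a token vocabulary, K-means with Euclidean distance, and nearest-to-center prototype selection) can be illustrated for the simplest K=1 case, where the single cluster center is just the mean vector. The smoothed TF-IDF weighting below is a common convention assumed for illustration; the reference does not specify its exact weighting scheme.

```python
import numpy as np

def tfidf(token_docs, vocab):
    """Smoothed TF-IDF vectors over a fixed token vocabulary.
    token_docs: list of token lists; vocab: list of tokens."""
    n = len(token_docs)
    index = {t: i for i, t in enumerate(vocab)}
    tf = np.zeros((n, len(vocab)))
    for row, doc in enumerate(token_docs):
        for tok in doc:
            if tok in index:
                tf[row, index[tok]] += 1          # raw term frequency
    df = np.count_nonzero(tf > 0, axis=0)         # document frequency
    idf = np.log((1 + n) / (1 + df)) + 1          # smoothed idf
    return tf * idf

def prototype(token_docs, vocab):
    """K=1 case of the quoted procedure: the cluster center is the mean
    TF-IDF vector, and the training sample closest to it (Euclidean
    distance) is chosen as the prototype initialization."""
    x = tfidf(token_docs, vocab)
    center = x.mean(axis=0)
    return int(np.argmin(np.linalg.norm(x - center, axis=1)))
```

For K>1 the same distance-to-center selection would run per cluster after a K-means++ fit; the quoted passage also perturbs each chosen prototype with Gaussian noise, which is omitted here.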
Korsunsky et al., from analogous art, teaches wherein the one or more patterns comprises one or more regexes (See ¶ [0190], Teaches that Embodiments of the content search logic 312 may encompass hardware-based regular expression logic while performing a search for position dependent substrings. To this end, a regular expression may first be partitioned into a set of position dependent substrings. A pattern tree may then be constructed which represents and enacts the search for substrings. When a substring is found, the relative positions of the substrings may be examined and, depending upon the result of the examination, a positive or negative match may be effectively determined. The logic may include the capability of detecting character classes (such as /[abc]/) and wildcards (such as * and .) which may be included in the regular expression. The logic may be capable of matching the beginning as well as the end of a string. Additionally or alternatively, the hardware-based regular expression logic can match alternation (such as /cat|dog/—“match ‘cat’ or ‘dog’”). In an embodiment, all possible matches resulting from an alternation may be built into the pattern tree. In another embodiment, the character class detector may be employed to match alternation. Alternately or additionally, the hardware-based regular expression logic may be able to match repetitive patterns (e.g. patterns repeated using quantifiers such as /a{3}/—“match ‘aaa’”). In an embodiment, the repetition may be unwound and the resulting patterns may be inserted into the pattern tree.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Korsunsky et al. into the combination of Jones et al. and Liu et al. in order to address patterns relevant to a variety of types of threats that relate to computer systems, including computer networks (See Korsunsky et al. ¶ [0008]).
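A software analogue of the position-dependent substring matching described in the Korsunsky passage (partition a pattern into literal substrings, locate each, then examine their relative positions, with optional anchoring at the beginning and end of the string) might be sketched as follows. The reference implements this in hardware with a pattern tree, so the functions below are purely illustrative.

```python
def partition(pattern):
    """Partition a wildcard pattern into position-dependent literal
    substrings by splitting on '*'. A software sketch in the spirit of
    the quoted hardware regex logic, not its actual pattern tree."""
    return [p for p in pattern.split("*") if p]

def matches(pattern, text):
    """Locate each substring in order and check relative positions:
    every piece must occur after the end of the previous piece, and the
    pattern is anchored at either end unless it is open-ended there."""
    pieces = partition(pattern)
    # Anchor at the beginning unless the pattern starts open-ended.
    if pieces and not pattern.startswith("*") and not text.startswith(pieces[0]):
        return False
    pos = 0
    for piece in pieces:
        i = text.find(piece, pos)
        if i < 0:
            return False          # substring missing: negative match
        pos = i + len(piece)      # next piece must start after this one
    # Anchor at the end unless the pattern ends open-ended.
    if pieces and not pattern.endswith("*") and not text.endswith(pieces[-1]):
        return False
    return True
```

Alternation such as /cat|dog/ would be handled, as in the quoted embodiment, by unwinding it into multiple such patterns and accepting if any one matches; quantifiers like /a{3}/ likewise unwind into the literal "aaa" before partitioning.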
Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Long et al. (US 10592667 B1) teaches an apparatus that can include a processor that can extract, from an input binary file, an image data structure, and can scale the image data structure to a predetermined size, and/or modify the image data structure to represent a grayscale image. The processor can calculate a modified pixel value for each pixel in the image data structure, and can define a binary vector based on the modified pixel value for each pixel in the image data structure. The processor can also identify a set of nearest neighbor binary vectors for the binary vector based on a comparison between the binary vector and a set of reference binary vectors stored in a malware detection database. The processor can then determine a malware status of the input binary file based on the set of nearest neighbor binary vectors satisfying a similarity criterion associated with a known malware image from a known malware file. Any inquiry concerning this communication or earlier communications from the examiner should be directed to James R Hollister whose telephone number is (571)270-3152. The examiner can normally be reached Mon - Fri 7:30 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea, can be reached at (571) 272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. James Hollister /J.R.H./Examiner, Art Unit 2499 1/24/26 /PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499

Prosecution Timeline

May 31, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602472
BLINDING COUNTERMEASURE TO SECURE MULTIPLICATION OPERATIONS AGAINST SIDE CHANNEL ATTACKS
2y 5m to grant Granted Apr 14, 2026
Patent 12603892
Global mapping to internal applications
2y 5m to grant Granted Apr 14, 2026
Patent 12598170
REVERSE AUTHENTICATOR OF VIRTUAL OBJECTS AND ENTITIES IN VIRTUAL REALITY COMPUTING ENVIRONMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12580940
SECURITY ASSESSMENT OF SERVICES BEING MIGRATED TO A CLOUD PLATFORM
2y 5m to grant Granted Mar 17, 2026
Patent 12563252
Low Latency Adaptive Bitrate Linear Video Delivery System
2y 5m to grant Granted Feb 24, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+25.6%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 215 resolved cases by this examiner. Grant probability derived from career allow rate.
