Prosecution Insights
Last updated: April 19, 2026
Application No. 17/888,683

SYSTEMS AND METHODS FOR DETECTING INSIDER ATTACKS ON A COMMUNICATION NETWORK

Final Rejection §103
Filed: Aug 16, 2022
Examiner: KHADKA, AMIT
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Fortinet Inc.
OA Round: 4 (Final)
Grant Probability: 20% (At Risk)
Projected OA Rounds: 5-6
Time to Grant: 3y 6m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 20% (grants only 20% of cases; 1 granted / 5 resolved; -38.0% vs TC avg)
Interview Lift: -20.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 6m (typical timeline; 22 currently pending)
Total Applications: 27 (career history, across all art units)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 69.9% (+29.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 5 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments filed on 10/27/2025 have been accepted and considered in this office action. Claims 1, 10-12 and 20 have been amended. No claims have been canceled. No claims have been newly added.

Claim Objections

Claims 2, 8, 13 and 19 are objected to because of the following informalities: Claims 2, 8, 13 and 19 have an antecedent basis issue in which the phrase "the actual insider attack image" needs to be changed to "an actual insider attack image". Appropriate correction is required.

Response to Arguments

The reply filed on 10/27/2025 has been entered. Applicant's arguments filed 10/27/2025 have been fully considered but they are not persuasive.

Applicant argues that Gayathri generates daily grayscale images and classifies them individually, and thus allegedly does not teach or suggest assigning temporally distinct grayscale images to RGB channels for unified color behavior image evaluation. Applicant further argues that Qiao/Jian relate to malware and combine different feature types. Thus, Applicant asserts that neither reference contemplates or suggests the use of temporally distinct grayscale images assigned to RGB channels for a unified color behavior image evaluation.

Applicant's arguments are against the references individually and thus are unpersuasive. Applicant's argument is premised on each individual reference not teaching the totality of certain limitations and, therefore, fails to address the obviousness rationale as set forth. Applicant additionally asserts the proposed combination would be "inoperative for the intended purpose." These arguments are not persuasive.
Applicant has provided no reasoned explanation, let alone any evidence, as to how the proposed modifications to Gayathri, as set forth in the prior art rejection, would suddenly render Gayathri unsuitable or inoperative for its intended purpose. Applicant's argument merely summarizes or characterizes, at an extremely high level, what each reference teaches. There is no identification of what Gayathri's "intended purpose" is, or of how the modifications proposed by the prior art rejection would result in a system that fails to achieve Gayathri's "intended purpose".

The examiner finds that Gayathri's purpose is insider threat classification using an image-based representation derived from user activity logs. Modifying Gayathri to multiplex multiple temporally distinct grayscale images into a single RGB representation preserves that purpose and operation. The modified Gayathri system still classifies insider behavior for insider attack detection; it simply uses a color-based image classification and multi-channel representation as taught by Qiao/Jian. Therefore, the examiner finds the proposed modification does not render Gayathri unsatisfactory for its intended purpose.

Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity. Thus, for a given insider/user, Gayathri yields a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).
Accordingly, Gayathri teaches the limitation of "forming… a plurality of grayscale behavior images, each corresponding to behavior features derived from a respective time period" and also teaches selecting a first grayscale image corresponding to a first prior time period, a second grayscale image corresponding to a second prior time period, and a third grayscale image corresponding to a current time period, by selecting three day-based grayscale images for that same user. Under the broadest reasonable interpretation (BRI), selecting "first," "second," and "current" time periods is taught by Gayathri's analysis of per-user daily output. A person of ordinary skill in the art would recognize that a sequence of three consecutive days of Gayathri's images constitutes the temporal distinctness recited in the claim. The claim's "semantic" representation of time is merely a result of the specific data selection (daily logs) already performed in the base system of Gayathri.

Applicant contends that Qiao/Jian are limited to the context of malware and code features. However, the examiner found that a person of ordinary skill in the art would recognize that these references' architecture and methodology transcend the literal categorization of the underlying data subject to analysis and are equally applicable to other forms, topics, or venues of underlying data. Qiao teaches constructing three matrices and converting them into a single three-channel (RGB) image for classification by a neural network (Qiao, Section III (A-D)). Jian likewise teaches that three matrices are respectively used as the three channels of an RGB image and merged into a visualized color image (Jian, Section 3.2).
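The per-user-per-day grayscale pipeline described above can be sketched in a few lines of Python. This is an illustrative sketch only: the square `size` x `size` layout and min-max scaling are our assumptions for demonstration, not Gayathri's actual encoding, and the function names are hypothetical.

```python
def feature_vector_to_grayscale(vec, size):
    """Scale one day's behavior feature vector into a size x size
    8-bit grayscale image (min-max scaling to the 0-255 range;
    hypothetical layout, not the reference's actual encoding)."""
    lo, hi = min(vec), max(vec)
    span = (hi - lo) or 1  # avoid division by zero for constant vectors
    pixels = [round(255 * (v - lo) / span) for v in vec]
    # Reshape the flat pixel list into size rows of size columns.
    return [pixels[r * size:(r + 1) * size] for r in range(size)]

def select_window(daily_images, t):
    """Select the grayscale behavior images for day t-2 (first prior
    time period), day t-1 (second prior time period), and day t
    (current time period) for one insider/user."""
    if t < 2:
        raise ValueError("need at least two prior days of images")
    return daily_images[t - 2], daily_images[t - 1], daily_images[t]
```

A run of three consecutive daily images returned by `select_window` is exactly the "temporally distinct" triple the rejection maps onto the claimed first, second, and current time periods.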
The examiner found that a person of ordinary skill in the art would readily appreciate several commonly known facts: i) grayscale images represent color in each pixel with a single 8-bit value (256 possible values) ranging from 0 (black) to 255 (white); ii) RGB images represent color in each pixel with an ordered tuple of three 8-bit values (256 possible values for each) that control the amount of red in the first value from 0 (black) to 255 ("full" red), the amount of green in the second value from 0 (black) to 255 ("full" green), and the amount of blue in the third value from 0 (black) to 255 ("full" blue). The examiner further found that, in light of Qiao and Jian, a person of ordinary skill would readily recognize that AI models, machine learning, and anomaly detection have known benefits when performed on RGB images rather than on grayscale images.

A person of ordinary skill in the art, starting with Gayathri's "daily" grayscale images, would be motivated to improve the detection model by following the teachings of Qiao/Jian. These references provide the technical blueprint for taking three distinct grayscale matrices and merging them into a single 256 x 256 x 3 three-channel RGB image to improve deep learning performance. As in the modified Gayathri system, three grayscale images of the same insider, which each correspond to different time periods (e.g., current day-2, current day-1, current day), are combined into a single RGB image by assigning the first prior image to a first channel (red), the second prior image to a second channel (green), and the current image to a third channel (blue), yielding a composite color behavior image that is then applied to an image classification model.
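The channel assignment just described can be illustrated with a short sketch (pure Python, pixel values as plain 0-255 ints; the helper name `to_color_behavior_image` is ours, not drawn from any cited reference):

```python
def to_color_behavior_image(prior2, prior1, current):
    """Merge three same-sized 8-bit grayscale images (2-D lists of
    0-255 ints) into one color image whose pixels are (R, G, B)
    tuples: first prior period -> red channel, second prior period ->
    green channel, current period -> blue channel."""
    rows, cols = len(current), len(current[0])
    return [
        [(prior2[r][c], prior1[r][c], current[r][c]) for c in range(cols)]
        for r in range(rows)
    ]
```

Note that each output pixel position carries red, green, and blue components drawn from the same position in the three input images, which is the per-position correspondence the rejection attributes to RGB channel combination.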
In this composite image, each channel represents insider behavior from a different time period because each channel is populated with a grayscale behavior image derived from the corresponding time period's behavior features for that insider (as taught by Gayathri's per-user-per-day grayscale image generation). The resulting RGB image comprises pixel positions having red, green, and blue components derived from corresponding positions in the three grayscale images by the nature of RGB channel combination as taught by Qiao/Jian. Therefore, the applicant's argument is ultimately not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9-12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri (Gayathri, R.G.; Sajjanhar, A.; Xiang, Y. Image-Based Feature Representation for Insider Threat Classification. Appl. Sci. 2020, 10, 4945.) in view of Qiao (Qiao, Yanchen et al.
"A Multi-Channel Visualization Method for Malware Classification Based on Deep Learning." 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). IEEE, 2019. 757–762.) in view of Jian (Yifei Jian, Hongbo Kuang, Chenglong Ren, Zicheng Ma, Haizhou Wang, A novel framework for image-based malware detection with a deep neural network, Computers & Security, Volume 109, 2021, 102400, ISSN 0167-4048, https://doi.org/10.1016/j.cose.2021.102400.)

Regarding Claim 1, Gayathri teaches: logging, by a processing resource, activities of an insider performed in relation to a communication network to yield logged activities (Gayathri, [Section 3.1], the dataset comprises various log files with information regarding the user's logon/logoff details (logon.csv), details of the user's browsing history (http.csv), file access patterns (files.csv), external device usage within the organization (device.csv), and email communications sent/received by the user (email.csv)); extracting, by the processing resource, a set of defined behavioral features from the logged activities (Gayathri, [Section 3], feature vectors that represent user behavior are constructed by extracting relevant attributes from the usage logs); forming, by the processing resource, a plurality of grayscale behavioral images, each corresponding to behavioral features derived from a respective time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity. Thus, for a given insider/user, Gayathri yields a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).);
selecting, by the processing resource, a first grayscale behavioral image corresponding to a first prior time period, a second grayscale behavioral image corresponding to a second prior time period, and a third grayscale behavioral image corresponding to a current time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity. Thus, for a given insider/user, Gayathri yields and selects a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).); applying, by the processing resource, an insider attack classification model to the behavior image to determine whether the behavior image indicates an insider attack; wherein the behavior image indicates an insider attack (Gayathri, [Section 3.3], provides for analyzing the grayscale image dataset and using a classification model to classify the image (and the user whose behavior the image is based on) as malicious or non-malicious; see title, Section 1, for "malicious" being an insider attack); wherein each color channel of the behavioral image semantically represents insider behavior for a different time period (Gayathri, Section 3.2, teaches that feature vectors representing insider/user behavior are generated on a per-user-per-day basis and that each feature vector is represented as a grayscale image.
Thus, for a given insider/user, Gayathri represents grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).);

Gayathri does not explicitly teach: wherein the behavior image comprises the yielded color behavioral image; combining, by the processing resource, the first, second, and third grayscale behavioral images into a single color behavioral image by assigning: the first grayscale behavioral image to a red channel, the second grayscale behavioral image to a green channel, and the third grayscale behavioral image to a blue channel; and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images.

However, Qiao teaches: combining, by the processing resource, the first, second, and third grayscale behavioral images into a single color behavioral image (Qiao, entire paper; section IIIA for the first grayscale matrix, IIIB for the second grayscale matrix, IIIC for the third grayscale matrix; section IIID, converting the three grayscale matrices to a single 3-channel image (its "RGB channels", see section 1)); wherein the behavior image comprises the yielded color behavioral image (Qiao, section IV provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms).

Both Gayathri and Qiao are in the same field of endeavor: improving cybersecurity through advanced anomaly detection methods by extracting features and representing them as image data structures to utilize image-based classification algorithms for non-image data.
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri to incorporate the teaching of Qiao, which extracts different feature cross-sections into separate grayscale matrices and then combines them into a single color image for image-based classification using a color image classifier. This would result in a modified Gayathri system that takes multiple different cross-sections of user behavior feature sets, creates different grayscale images, and then combines them into a single RGB image classified with a color image classifier. One would be motivated to perform such a modification because color image classification has known advantages over singular grayscale image classification (see, e.g., Qiao, section 1).

Gayathri/Qiao does not explicitly teach the following; however, Jian teaches: combining, by the processing resource, the first, second, and third grayscale behavioral images into a single color behavioral image by assigning (Jian, section 3, describes that the three matrices are respectively used as the three channels of an RGB image and merged into a visualized color image): i) the first grayscale behavioral image to a red channel, ii) the second grayscale behavioral image to a green channel, and iii) the third grayscale behavioral image to a blue channel (Jian, Section 3.2 discloses using three matrices as the three RGB channels of an image; in an RGB image, the channels are inherently red, green, and blue, such that assigning a first matrix to the red channel, a second to green, and a third to blue is an obvious implementation of Jian's RGB channel combination.)
and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images (Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image).

It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, including the technique of combining grayscale images by assigning each image to the red, green, or blue channel respectively to obtain a color image, thereby providing more useful information for the training of the detection model so as to improve detection performance. One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1).

Regarding claim 9, Gayathri/Qiao/Jian teaches the method of claim 1. Qiao, section IV, provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms. Gayathri/Qiao does not explicitly teach the following; however, Jian teaches: wherein the color behavioral image includes a number of pixel positions each with red, green, and blue components corresponding to the same pixel position in each of the grayscale behavioral image, the first grayscale attack context image, and the second grayscale attack context image.
(Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image.) It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, including the technique of combining grayscale images by assigning each image to the red, green, or blue channel respectively to obtain a color image, thereby providing more useful information for the training of the detection model so as to improve detection performance. One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1).
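Read as a whole, the flow the rejection attributes to the combined Gayathri/Qiao/Jian system is: select three temporally distinct grayscale behavior images, merge them channel-wise into one color behavior image, and classify the composite. A minimal end-to-end sketch in Python, where the `model` callable is a hypothetical stand-in for the trained image classifier:

```python
def classify_insider_behavior(daily_images, t, model):
    """Select the day t-2, day t-1, and day t grayscale images for one
    insider, merge them into a single RGB behavior image (R = first
    prior period, G = second prior period, B = current period), and
    apply the classification model. `model` is any callable returning
    True when the composite image indicates an insider attack
    (hypothetical interface for illustration)."""
    prior2, prior1, current = daily_images[t - 2], daily_images[t - 1], daily_images[t]
    rows, cols = len(current), len(current[0])
    # Each pixel position gets (R, G, B) components taken from the
    # corresponding position in the three grayscale images.
    color_image = [
        [(prior2[r][c], prior1[r][c], current[r][c]) for c in range(cols)]
        for r in range(rows)
    ]
    return model(color_image)
```

The sketch makes the claim mapping concrete: each color channel of the composite semantically represents behavior from a different time period because it is populated wholesale from that period's grayscale image.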
Regarding Claim 10, Gayathri teaches: logging activities of an insider performed in relation to a communication network to yield logged activities (Gayathri, [Section 3.1], the dataset comprises various log files with information regarding the user's logon/logoff details (logon.csv), details of the user's browsing history (http.csv), file access patterns (files.csv), external device usage within the organization (device.csv), and email communications sent/received by the user (email.csv)); extracting a set of defined behavioral features from the logged activities (Gayathri, [Section 3], feature vectors that represent user behavior are constructed by extracting relevant attributes from the usage logs); forming a plurality of grayscale behavioral images, each corresponding to behavioral features derived from a respective time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity. Thus, for a given insider/user, Gayathri yields a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).); selecting a first grayscale behavioral image corresponding to a first prior time period, a second grayscale behavioral image corresponding to a second prior time period, and a third grayscale behavioral image corresponding to a current time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity.
Thus, for a given insider/user, Gayathri yields and selects a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).); applying an insider attack classification model to the behavior image to determine whether the behavior image indicates an insider attack; wherein the behavior image indicates an insider attack (Gayathri, [Section 3.3], provides for analyzing the grayscale image dataset and using a classification model to classify the image (and the user whose behavior the image is based on) as malicious or non-malicious; see title, Section 1, for "malicious" being an insider attack); wherein each color channel of the behavioral image semantically represents insider behavior for a different time period (Gayathri, Section 3.2, teaches that feature vectors representing insider/user behavior are generated on a per-user-per-day basis and that each feature vector is represented as a grayscale image. Thus, for a given insider/user, Gayathri represents grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).);

Gayathri does not explicitly teach: wherein the behavior image comprises the yielded color behavioral image; combining the first, second, and third grayscale behavioral images into a single color behavioral image by assigning: the first grayscale behavioral image to a red channel, the second grayscale behavioral image to a green channel, and the third grayscale behavioral image to a blue channel; and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images.

However, Qiao teaches: combining, by the processing resource, the first, second, and third grayscale behavioral images into a single color behavioral image (Qiao, entire paper; section IIIA for the first grayscale matrix, IIIB for the second grayscale matrix, IIIC for the third grayscale matrix;
section IIID, converting the three grayscale matrices to a single 3-channel image (its "RGB channels", see section 1)); wherein the behavior image comprises the yielded color behavioral image (Qiao, section IV provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms).

Both Gayathri and Qiao are in the same field of endeavor: improving cybersecurity through advanced anomaly detection methods by extracting features and representing them as image data structures to utilize image-based classification algorithms for non-image data.

It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri to incorporate the teaching of Qiao, which extracts different feature cross-sections into separate grayscale matrices and then combines them into a single color image for image-based classification using a color image classifier. This would result in a modified Gayathri system that takes multiple different cross-sections of user behavior feature sets, creates different grayscale images, and then combines them into a single RGB image classified with a color image classifier. One would be motivated to perform such a modification because color image classification has known advantages over singular grayscale image classification (see, e.g., Qiao, section 1).
Gayathri/Qiao does not explicitly teach the following; however, Jian teaches: combining the first, second, and third grayscale behavioral images into a single color behavioral image by assigning (Jian, section 3, describes that the three matrices are respectively used as the three channels of an RGB image and merged into a visualized color image): i) the first grayscale behavioral image to a red channel, ii) the second grayscale behavioral image to a green channel, and iii) the third grayscale behavioral image to a blue channel (Jian, Section 3.2 discloses using three matrices as the three RGB channels of an image; in an RGB image, the channels are inherently red, green, and blue, such that assigning a first matrix to the red channel, a second to green, and a third to blue is an obvious implementation of Jian's RGB channel combination); and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images (Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image).

It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, including the technique of combining grayscale images by assigning each image to the red, green, or blue channel respectively to obtain a color image, thereby providing more useful information for the training of the detection model so as to improve detection performance.
One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1).

Regarding claim 11, Gayathri/Qiao/Jian teaches the computer readable medium of claim 10. Qiao, section IV, provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms. Gayathri/Qiao does not explicitly teach the following; however, Jian teaches: wherein the color behavioral image includes a number of pixel positions each with red, green, and blue components corresponding to the same pixel position in each of the grayscale behavioral image, the first grayscale attack context image, and the second grayscale attack context image (Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image).

It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, including the technique of combining grayscale images by assigning each image to the red, green, or blue channel respectively to obtain a color image, thereby providing more useful information for the training of the detection model so as to improve detection performance. One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1).
Regarding Claim 12, Gayathri teaches: log activities of an insider performed in relation to a communication network to yield logged activities (Gayathri, [Section 3.1], the dataset comprises various log files with information regarding the user's logon/logoff details (logon.csv), details of the user's browsing history (http.csv), file access patterns (files.csv), external device usage within the organization (device.csv), and email communications sent/received by the user (email.csv)); extract a set of defined behavioral features from the logged activities (Gayathri, [Section 3], feature vectors that represent user behavior are constructed by extracting relevant attributes from the usage logs); form a plurality of grayscale behavioral images, each corresponding to behavioral features derived from a respective time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity. Thus, for a given insider/user, Gayathri yields a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).); select a first grayscale behavioral image corresponding to a first prior time period, a second grayscale behavioral image corresponding to a second prior time period, and a third grayscale behavioral image corresponding to a current time period (Gayathri teaches preprocessing the CERT log files to obtain a feature vector of each user for each day (Gayathri, Section 3.2) and further teaches representing the feature vectors as grayscale images. Gayathri teaches the feature vectors are used for feature representation at user-per-day granularity.
Thus, for a given insider/user, Gayathri yields and selects a plurality of grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).); apply an insider attack classification model to the behavior image to determine whether the behavior image indicates an insider attack; wherein the behavior image indicates an insider attack (Gayathri, [Section 3.3], provides for analyzing the grayscale image dataset and using a classification model to classify the image (and the user whose behavior the image is based on) as malicious or non-malicious; see title, Section 1, for "malicious" being an insider attack); wherein each color channel of the behavioral image semantically represents insider behavior for a different time period (Gayathri, Section 3.2, teaches that feature vectors representing insider/user behavior are generated on a per-user-per-day basis and that each feature vector is represented as a grayscale image. Thus, for a given insider/user, Gayathri represents grayscale behavior images across multiple days/time periods (e.g., Day t-2, Day t-1, Day t).);

Gayathri does not explicitly teach: wherein the behavior image comprises the yielded color behavioral image; combining the first, second, and third grayscale behavioral images into a single color behavioral image by assigning: the first grayscale behavioral image to a red channel, the second grayscale behavioral image to a green channel, and the third grayscale behavioral image to a blue channel; and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images.

However, Qiao teaches: combining the first, second, and third grayscale behavioral images into a single color behavioral image (Qiao, entire paper; section IIIA for the first grayscale matrix, IIIB for the second grayscale matrix, IIIC for the third grayscale matrix; section IIID, converting the three
grayscale matrices to a single 3-channel image (its "RGB channels"; see section 1).); wherein the behavior image comprises the yielded color behavioral image (Qiao, section IV, provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms). Both Gayathri and Qiao are in the same field of endeavor of improving cybersecurity through advanced anomaly detection methods by extracting features and representing them as image data structures in order to apply image-based classification algorithms to non-image data. It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri to incorporate the teaching of Qiao, which extracts different feature cross-sections into separate grayscale matrices that are then combined into a single color image for image-based classification using a color image classifier. The result would be a modified Gayathri system that takes multiple different cross-sections of user behavior feature sets, creates separate grayscale images, combines them into a single RGB image, and classifies the combined image with a color image classifier. One would be motivated to perform such a modification because color image classification has known advantages over single grayscale image classification (see, e.g., Qiao, section 1).
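For context on the cited grayscale-to-color combination, the operation the rejection attributes to Qiao (three single-channel matrices merged into one 3-channel image) can be sketched as follows. This is an illustrative NumPy sketch under assumed array names and sizes, not code from Qiao or the application:

```python
import numpy as np

def combine_grayscale_to_rgb(day1, day2, day3):
    """Stack three (H x W) grayscale behavior images into one
    (H x W x 3) color image: day1 -> red channel, day2 -> green,
    day3 -> blue. Each output pixel takes its R, G, B components
    from the same pixel position in the three inputs."""
    assert day1.shape == day2.shape == day3.shape
    return np.stack([day1, day2, day3], axis=-1)

# Hypothetical 256 x 256 behavior images for three consecutive days.
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (256, 256), dtype=np.uint8) for _ in range(3)]
color = combine_grayscale_to_rgb(*imgs)
# color.shape is (256, 256, 3); color[i, j] holds (R, G, B) at (i, j)
```

The design point at issue in the claims is exactly this channel assignment: the color image carries three temporally (or semantically) distinct grayscale images in its R, G, and B planes rather than three views of one moment.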
Gayathri/Qiao does not explicitly teach the claimed channel assignment; however, Jian teaches: combining the first, second, and third grayscale behavioral images into a single color behavioral image by assigning (Jian, section 3, describes that the three matrices are respectively used as the three channels of an RGB image and merged into a visualized color image): i) the first grayscale behavioral image to a red channel, ii) the second grayscale behavioral image to a green channel, and iii) the third grayscale behavioral image to a blue channel (Jian, section 3.2, discloses using three matrices as the three RGB channels of an image; in an RGB image, the channels are inherently red, green, and blue, such that assigning a first matrix to the red channel, a second to green, and a third to blue is an obvious implementation of Jian's RGB channel combination); and further wherein the color behavioral image comprises a plurality of pixel positions, each pixel having red, green, and blue components derived from corresponding positions in the three grayscale behavioral images (Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, i.e., the technique of combining grayscale images by assigning each image to the red, green, or blue channel, respectively, to obtain a color image, which provides more useful information for training the detection model and thereby improves detection performance.
One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1). Regarding claim 20, Gayathri/Qiao teaches the system of claim 12. Gayathri/Qiao does not explicitly teach: wherein the color behavioral image includes a number of pixel positions, each with red, green, and blue components corresponding to the same pixel position in each of the grayscale behavioral image, the first grayscale attack context image, and the second grayscale attack context image. However, Jian teaches: wherein the color behavioral image includes a number of pixel positions, each with red, green, and blue components corresponding to the same pixel position in each of the grayscale behavioral image, the first grayscale attack context image, and the second grayscale attack context image (Jian, section 3.2, describes that the binary file matrix corresponding to each sample is used as the first visualization channel, the byte word vector matrix is used as the second visualization channel, and the opcode word vector matrix is used as the third visualization channel; finally, they are merged into a 256 x 256 x 3 three-channel RGB image). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao to incorporate the teaching of Jian, i.e., the technique of combining grayscale images by assigning each image to the red, green, or blue channel, respectively, to obtain a color image, which provides more useful information for training the detection model and thereby improves detection performance. One would be motivated to perform such a modification on the Gayathri/Qiao system to provide a more comprehensive representation of the data, which can be used to train more effective detection models (see, e.g., Jian, section 1). Claims 2 and 3 are rejected under 35 U.S.C.
103 as being unpatentable over Gayathri in view of Qiao in view of Jian, further in view of Kumar (US 20210217135 A1). Regarding Claim 2, Gayathri/Qiao/Jian teaches the method of claim 1. Gayathri/Qiao/Jian does not explicitly teach: generating, by the processing resource, at least one synthetic image using the image. Kumar teaches: generating, by the processing resource, at least one synthetic image using the actual image (Kumar, para. [0005], discloses a method which includes generating a synthetic image by applying targeted modifications to one or more features of the original image). Kumar does not explicitly teach: wherein the machine learning data is insider attack images. However, Gayathri/Qiao/Jian teaches: wherein the machine learning data is insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack; Qiao, section 1, discloses a multi-channel visualization method for malware classification based on deep learning). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian system to incorporate the teaching of Kumar, i.e., the technique of generating synthetic data using an original image, which can be used to automate image measurement to determine image measurements with variations. One would be motivated to perform such a modification on the Gayathri/Qiao/Jian system to generate more synthetic training datasets to train the system for image classification (see, e.g., Kumar, paragraph [0005]).
Regarding Claim 3, Gayathri/Qiao/Jian/Kumar teaches the method of claim 2. Gayathri/Qiao/Jian does not explicitly teach: training, by the processing resource, the classification model using a combination of images including at least the actual image and the at least one synthetic image. Kumar teaches: training, by the processing resource, the insider attack classification model using a combination of images including at least the actual insider attack image and the at least one synthetic insider attack image (Kumar, para. [0005], discloses that a combination of original and synthetic images is used to train the machine learning model). Kumar does not teach: wherein the classification model is an insider attack classification model, and wherein the images are insider attack images. However, Gayathri teaches: wherein the classification model is an insider attack classification model (Gayathri, section 3, discloses that the image classification model is for insider threat detection) and wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian system to incorporate the teaching of Kumar, i.e., the technique of training the model using a combination of synthetic and original images, which can be used to automate image measurement to determine image measurements with variations. One would be motivated to perform such a modification on the Gayathri/Qiao/Jian system to reduce errors and to improve the image classification system, enhancing the overall intrusion detection system (see, e.g., Kumar, paragraph [0005]).
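In implementation terms, the Kumar-based rationale of training on a mix of actual and synthetic images amounts to generating variants of the originals and concatenating the two sets (with matching labels) before fitting a classifier. The following is a minimal illustrative sketch; the dataset shapes and the simple brightness-shift "synthetic" transform are assumptions for demonstration, not Kumar's actual method:

```python
import numpy as np

def make_synthetic(image, shift=10):
    """Toy synthetic-image generator: apply a targeted modification
    (here, a brightness shift) to an original image."""
    return np.clip(image.astype(np.int16) + shift, 0, 255).astype(np.uint8)

def build_training_set(actual_images, labels):
    """Combine each actual image with one synthetic variant,
    duplicating labels so synthetic variants keep their class."""
    synth = [make_synthetic(img) for img in actual_images]
    X = np.concatenate([np.stack(actual_images), np.stack(synth)])
    y = np.concatenate([labels, labels])
    return X, y

# Two toy 8 x 8 "actual" images with class labels 0 and 1.
actual = [np.full((8, 8), v, dtype=np.uint8) for v in (50, 200)]
X, y = build_training_set(actual, np.array([0, 1]))
# X now holds 4 images (2 actual + 2 synthetic) with matching labels in y
```

The resulting (X, y) pair would then be fed to whatever classifier the system uses; the key point in the claim mapping is only that actual and synthetic images are trained on together.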
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian in view of Kumar, further in view of Huynh (US 11106903 B1). Regarding Claim 4, Gayathri/Qiao/Jian/Kumar teaches the method of claim 2, wherein the at least one synthetic insider attack image is a first insider attack image. Gayathri/Qiao/Jian/Kumar does not explicitly teach: applying, by the processing resource, a generative adversarial network to the actual image to generate at least a second synthetic image. Huynh teaches: applying, by the processing resource, a generative adversarial network to the actual image to generate at least a second synthetic image (Huynh, column 3, lines 16-17, discloses that in GANs, a generator is trained to generate synthetic image data based on input image data). Huynh does not teach: wherein the image is an insider attack image. However, Gayathri teaches: wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian/Kumar system to incorporate the teaching of Huynh, i.e., the technique of generating a synthetic image from the real image with a generative adversarial network (GAN). One would be motivated to perform such a modification on the Gayathri/Qiao/Jian/Kumar system to improve the quality of the synthetic, non-visible data, which would be beneficial for training a model for intrusion detection (see, e.g., Huynh, column 2).
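The Huynh mapping turns on a generator that takes image data as input and emits synthetic image data. As a rough sketch of that image-to-image shape only, the following uses a toy stand-in with untrained random weights; it is not Huynh's model, not a trained GAN, and the sizes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

class ToyGenerator:
    """Illustrative stand-in for a trained GAN generator: maps an
    input image to a new image via fixed (untrained, random) weights
    plus injected noise. Placeholder only, not an actual model."""
    def __init__(self, size=16):
        self.w = rng.normal(0.0, 0.1, (size, size))  # placeholder weights

    def __call__(self, image):
        # Linear mixing of the input plus Gaussian noise, squashed to
        # (-1, 1) as GAN generators commonly output.
        return np.tanh(image @ self.w + rng.normal(0.0, 1.0, image.shape))

actual = rng.random((16, 16))  # stand-in for an actual insider attack image
generator = ToyGenerator()
synthetic = generator(actual)  # a synthetic image derived from the actual image
```

A real system would substitute a trained generator (and, for the Walters-style chaining in claims 6 and 17, feed one generator's output into the next); the sketch only shows the input/output relationship the rejection relies on.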
Regarding Claim 5, Gayathri/Qiao/Jian/Kumar/Huynh teaches the method of claim 4. Gayathri/Qiao/Jian/Huynh does not explicitly teach: training, by the processing resource, the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image. Kumar teaches: training, by the processing resource, the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image (Kumar, paragraph [0005], discloses that a combination of synthetic images and original images is used to train the machine learning model). Kumar does not teach: wherein the classification model is an insider attack classification model, and wherein the images are insider attack images. However, Gayathri teaches: wherein the classification model is an insider attack classification model (Gayathri, section 3, discloses that the image classification model is for insider threat detection) and wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian/Huynh system to incorporate the teaching of Kumar, i.e., the technique of training the model using newly generated data, which can be synthetic, real, or a combination thereof, to produce training data that can be used to train a system in a short period of time and with a high degree of control over variations in the training data.
One would be motivated to perform such a modification on Gayathri/Qiao/Jian/Huynh’s system, to use a better data set to adjust the result using customizable or defined data sets to control the results for intrusion detection (see, e.g., Kumar, paragraph [0005]). Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian in view of Kumar and further in view of Walters (US 10540798 B1). Regarding Claim 6, Gayathri/Qiao/Jian/Kumar teaches the method of claim 2, at least one synthetic insider attack image is a first insider attack image Gayathri/Qiao/Jian/Kumar does not explicitly teach: applying, by the processing resource, a generative adversarial network to the first synthetic image to generate at least a second synthetic image; The Walters teaches: applying, by the processing resource, a generative adversarial network to the first synthetic insider attack image to generate at least a second synthetic insider attack image. (Walters, column 4, lines 10-16, In particular, the first model may receive an input of the template; generate, with a generative model, a first synthetic image with the template to output the first synthetic image. The subsequent model in the series of models may receive the first synthetic image as an input, generate a second synthetic image with the first synthetic image and output a second synthetic image.) The Walters does not teach: wherein the images are insider attack images. However, Gayathri teaches: wherein the images are insider attack images (Gayathri, section 3.2, discloses that the use of deep learning on the grayscale image to detect malicious insider, see title, section 1 where the “malicious” being an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a processing resource. 
It would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao/Jian/Kumar’s system to incorporate the teaching of Walters to include the technique to use GAN to generate a second synthetic image using the first synthetic image as input. By employing this technique, the logic circuitry may also train an overlay model to combine the two or more different synthetic images from the set of GANs with a template to create the new image. One would be motivated to perform such a modification on Gayathri/Qiao/Jian/Kumar’s system, to generate diverse collection of synthetic images which would be beneficial to train the model for intrusion detection (see, e.g., Walters, column 1). Regarding Claim 7, Gayathri/Qiao/Jian/Kumar/Walters teaches the method of claim 6, Gayathri/Qiao/Jian/Walters does not explicitly teach: training, by the processing resource, the classification model using a combination of images including at least the actual image, the first synthetic image, and the second image; The Kumar teaches: training, by the processing resource, the classification model using a combination of images including at least the actual image, the first synthetic image, and the second image (Kumar, paragraph [0005], discloses that the combination of both the synthetic images and original images are used to train the machine learning model.) The Kumar does not teach: wherein the classification model is insider attack classification model, wherein the images are insider attack images. However, Gayathri teaches: wherein the classification model is attack classification model (Gayathri, section 3, discloses that the image classification model is for insider threat detection) wherein the images are insider attack images (Gayathri, section 3.2, discloses that the use of deep learning on the grayscale image to detect malicious insider, see title, section 1 where the “malicious” being an insider attack). 
Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian/Kumar system to include the technique of training the model using newly generated data, which can be synthetic, real, or a combination thereof, to produce training data that can be used to train a system in a short period of time and with a high degree of control over variations in the training data. One would be motivated to perform such a modification on the Gayathri/Qiao/Jian/Kumar system to use a better data set and to adjust the result using customizable or defined data sets to control the results for intrusion detection (see, e.g., Kumar, paragraph [0005]). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian, further in view of Jones (WO 2018/045067 A1). Regarding Claim 8, Gayathri/Qiao/Jian teaches the method of claim 1. Gayathri/Jian does not explicitly teach: receiving, by the processing resource, a corroboration from an expert that the behavioral image indicates the actual insider attack; and wherein the storing of the behavioral image as an actual insider attack image is based upon a combination of the indication of the actual insider attack from the insider attack classification model and the corroboration from the expert. Jones teaches: receiving, by the processing resource, a corroboration from an expert that the behavioral image indicates the actual insider attack (Jones, page 38, lines 22-29, teaches that the threat analysis system updates the threat indicator database based on any real threats identified, such as those indicated by collected memory information or forensic analysis by an analyst.
The analyst may confirm the existence of a real threat based on system information, thereby providing expert corroboration); and storing the behavioral image as an actual insider attack image based upon a combination of the indication of the actual insider attack from the insider attack classification model and the corroboration from the expert (Jones, page 38, lines 13-17, teaches that a system administrator or analyst may use a threat notification report to identify files for further investigation; page 39, lines 24-25, further teaches that after forensic analysis confirms a real threat, a signature may be generated and added to the known threat indicator database). Jones does not teach: wherein the behavior image comprises the yielded color behavior image. However, Qiao teaches: wherein the behavior image comprises the yielded color behavior image (Qiao, section IV, provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Jian system to incorporate the teaching of Jones, i.e., the technique of corroboration from an expert to verify any real threats in the system, so that the threat analysis module may identify the vulnerabilities and install the appropriate patches to ensure the system is up to date and to plug known vulnerabilities. One would be motivated to perform such a modification on the Gayathri/Qiao/Jian system to combine automated and expert-driven analyses, improving the accuracy and reliability of threat detection (see, e.g., Jones, page 24). Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian in view of Kumar.
Regarding Claim 13, Gayathri/Qiao/Jian teaches the system of claim 12. Gayathri/Qiao/Jian does not explicitly teach: generate at least one synthetic image using the actual image. Kumar teaches: generate at least one synthetic image using the actual image (Kumar, para. [0005], discloses a method which includes generating a synthetic image by applying targeted modifications to one or more features of the original image). Kumar does not explicitly teach: wherein the machine learning data is insider attack images. However, Gayathri/Qiao teaches: wherein the machine learning data is insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack; Qiao, section 1, discloses a multi-channel visualization method for malware classification based on deep learning). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao system to incorporate the teaching of Kumar, i.e., the technique of generating synthetic data using an original image, which can be used to automate image measurement to determine image measurements with variations. One would be motivated to perform such a modification on the Gayathri/Qiao system to generate more synthetic training datasets to train the system for image classification (see, e.g., Kumar, paragraph [0005]). Regarding Claim 14, Gayathri/Qiao/Jian/Kumar teaches the system of claim 13. Gayathri/Qiao does not explicitly teach: train the classification model using a combination of images including at least the actual image and the at least one synthetic image.
Kumar teaches: train the classification model using a combination of images including at least the actual image and the at least one synthetic image (Kumar, para. [0005], discloses that a combination of original and synthetic images is used to train the machine learning model). Kumar does not teach: wherein the classification model is an insider attack classification model, and wherein the images are insider attack images. However, Gayathri teaches: wherein the classification model is an insider attack classification model (Gayathri, section 3, discloses that the image classification model is for insider threat detection) and wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao system to incorporate the teaching of Kumar, i.e., the technique of training the model using a combination of synthetic and original images, which can be used to automate image measurement to determine image measurements with variations. One would be motivated to perform such a modification on the Gayathri/Qiao system to reduce errors and to improve the image classification system, enhancing the overall intrusion detection system (see, e.g., Kumar, paragraph [0005]). Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian in view of Kumar, and further in view of Huynh (US 11106903 B1).
Regarding Claim 15, Gayathri/Qiao/Kumar teaches the system of claim 13, wherein the at least one synthetic insider attack image is a first insider attack image. Gayathri/Qiao/Kumar does not explicitly teach: wherein the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: apply a generative adversarial network to the actual image to generate at least a second synthetic image. Huynh teaches: wherein the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: apply a generative adversarial network to the actual image to generate at least a second synthetic image (Huynh, column 3, lines 16-17, discloses that in GANs, a generator is trained to generate synthetic image data based on input image data). Huynh does not teach: wherein the image is an insider attack image. However, Gayathri teaches: wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a computer readable medium and a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao/Kumar to incorporate the teaching of Huynh, i.e., the technique of generating a synthetic image from the real image with a generative adversarial network (GAN). One would be motivated to perform such a modification on the Gayathri/Qiao/Kumar system to improve the quality of the synthetic, non-visible data, which would be beneficial for training a model for intrusion detection (see, e.g., Huynh, column 2).
Regarding Claim 16, Gayathri/Qiao/Kumar/Huynh teaches the system of claim 15. Gayathri/Qiao/Huynh does not explicitly teach: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: train the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image. Kumar teaches: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: train the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image (Kumar, paragraph [0005], discloses that a combination of synthetic images and original images is used to train the machine learning model). Kumar does not teach: wherein the classification model is an insider attack classification model, and wherein the images are insider attack images. However, Gayathri teaches: wherein the classification model is an insider attack classification model (Gayathri, section 3, discloses that the image classification model is for insider threat detection) and wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a computer readable medium and a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao/Huynh to incorporate the teaching of Kumar, i.e., the technique of training the model using newly generated data, which can be synthetic, real, or a combination thereof, to produce training data that can be used to train a system in a short period of time and with a high degree of control over variations in the training data. One would be motivated to perform such a modification on the Gayathri/Qiao/Huynh system to use a better data set and to adjust the result using customizable or defined data sets to control the results for intrusion detection (see, e.g., Kumar, paragraph [0005]). Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao, in view of Jian, further in view of Kumar, and furthermore in view of Walters (US 10540798 B1). Regarding Claim 17, Gayathri/Qiao/Jian/Kumar teaches the system of claim 13. Gayathri/Qiao/Jian/Kumar does not explicitly teach: at least one synthetic image is a first image, and wherein the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: apply a generative adversarial network to the first synthetic image to generate at least a second synthetic image. Walters teaches: at least one synthetic image is a first image, and wherein the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: apply a generative adversarial network to the first synthetic image to generate at least a second synthetic image (Walters, column 4, lines 10-16: in particular, the first model may receive an input of the template and generate, with a generative model, a first synthetic image with the template to output the first synthetic image.
The subsequent model in the series of models may receive the first synthetic image as an input, generate a second synthetic image with the first synthetic image, and output the second synthetic image). Walters does not teach: wherein the images are insider attack images. However, Gayathri teaches: wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a computer readable medium and a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified Gayathri/Qiao/Kumar to incorporate the teaching of Walters, i.e., the technique of using a GAN to generate a second synthetic image using the first synthetic image as input. By employing this technique, the logic circuitry may also train an overlay model to combine the two or more different synthetic images from the set of GANs with a template to create the new image. One would be motivated to perform such a modification on the Gayathri/Qiao/Kumar system to generate a diverse collection of synthetic images, which would be beneficial for training the model for intrusion detection (see, e.g., Walters, column 1). Regarding Claim 18, Gayathri/Qiao/Jian/Kumar in view of Walters teaches the system of claim 17. Gayathri/Qiao/Jian/Walters does not explicitly teach: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: train the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image.
Kumar teaches: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: train the classification model using a combination of images including at least the actual image, the first synthetic image, and the second synthetic image (Kumar, paragraph [0005], discloses that a combination of synthetic images and original images is used to train the machine learning model). Kumar does not teach: wherein the images are insider attack images. However, Gayathri teaches: wherein the images are insider attack images (Gayathri, section 3.2, discloses the use of deep learning on the grayscale image to detect a malicious insider; see title and section 1, where "malicious" is an insider attack). Given the common knowledge in the field regarding the implementation of such methods via a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao/Walters system to include the technique of training the model using newly generated data, which can be synthetic, real, or a combination thereof, to produce training data that can be used to train a system in a short period of time and with a high degree of control over variations in the training data. One would be motivated to perform such a modification on the Gayathri/Qiao/Walters system to use a better data set and to adjust the result using customizable or defined data sets to control the results for intrusion detection (see, e.g., Kumar, paragraph [0005]). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gayathri in view of Qiao in view of Jian, further in view of Jones (WO 2018/045067 A1).
Regarding Claim 19, Gayathri/Qiao/Jian teaches the system of claim 12. Gayathri does not explicitly teach: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: receive a corroboration from an expert that the behavioral image indicates the actual insider attack; and wherein the storing the behavioral image as an actual insider attack image is based upon a combination of the indication of the actual insider attack from the insider attack classification model and the corroboration from the expert; wherein the behavior image comprises the yielded color behavior image.

Jones teaches: the computer readable medium further includes instructions, which when executed by the processing resource, cause the processing resource to: receive a corroboration from an expert that the behavioral image indicates the actual insider attack (Jones, WO 2018/045067 A1, page 38, lines 22-29, teaches that the threat analysis system updates the threat indicator database based on any real threats identified, such as those indicated by collected memory information or forensic analysis by an analyst; the analyst may confirm the existence of a real threat based on system information, thereby providing expert corroboration); and storing the behavioral image as an actual insider attack image is based upon a combination of the indication of the actual insider attack from the insider attack classification model and the corroboration from the expert (Jones, page 38, lines 13-17, teaches that a system administrator or analyst may use a threat notification report to identify files for further investigation; page 39, lines 24-25, further teaches that after forensic analysis confirms a real threat, a signature may be generated and added to the known threat indicator database). Jones does not teach: wherein the behavior image comprises the yielded color behavior image.
However, Qiao teaches: wherein the behavior image comprises the yielded color behavior image (Qiao, section IV, provides for generating the color image and performing malicious/benign classification based on the color image using machine learning algorithms). Given the common knowledge in the field regarding the implementation of such methods via a computer readable medium and a processing resource, it would have been obvious to a person of ordinary skill in the art before the effective filing date to have modified the Gayathri/Qiao system to incorporate the teaching of Jones, including the technique of corroboration from an expert to verify any real threats in the system, so that the threat analysis module may identify the vulnerabilities and install the appropriate patches to ensure the system is up to date and to plug known vulnerabilities. One would be motivated to perform such a modification on the Gayathri/Qiao system to combine automated and expert-driven analyses, improving the accuracy and reliability of threat detection (see, e.g., Jones, page 24).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
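Editor's note on the disputed limitation: the rejections above turn on assigning temporally distinct grayscale behavior images to the R, G, and B channels to yield a single color behavior image for classification. The following is a minimal illustrative sketch of that channel-stacking idea only; it is not code from the application or from Gayathri/Qiao, and the function name, array shapes, and three-window framing are assumptions.

```python
import numpy as np

def color_behavior_image(win1: np.ndarray, win2: np.ndarray, win3: np.ndarray) -> np.ndarray:
    """Stack three temporally distinct grayscale behavior images (one per
    time window) into the R, G, and B channels of one color behavior image.
    Shapes and dtype are illustrative assumptions, not from the record."""
    for img in (win1, win2, win3):
        assert img.ndim == 2 and img.shape == win1.shape
    # Earliest window -> red channel, latest window -> blue channel.
    return np.stack([win1, win2, win3], axis=-1).astype(np.uint8)

# Example: three 64x64 grayscale images become one 64x64x3 color image,
# which a classifier could then evaluate as a unified behavior image.
gray = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
color = color_behavior_image(*gray)
print(color.shape)  # (64, 64, 3)
```

The point of the stacking is that one classifier input then encodes behavior across multiple time windows at once, rather than classifying each day's grayscale image individually.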
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIT KHADKA whose telephone number is (703)756-1440. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L. Nickerson can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AMIT KHADKA/Examiner, Art Unit 2432 /Jeffrey Nickerson/Supervisory Patent Examiner, Art Unit 2432

Prosecution Timeline

Aug 16, 2022
Application Filed
Sep 09, 2024
Non-Final Rejection — §103
Nov 27, 2024
Response Filed
Mar 04, 2025
Final Rejection — §103
Jun 11, 2025
Request for Continued Examination
Jun 16, 2025
Response after Non-Final Action
Jul 14, 2025
Non-Final Rejection — §103
Oct 27, 2025
Response Filed
Feb 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567042
NONFUNGIBLE TOKEN PATH SYNTHESIS WITH SOCIAL SHARING
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

5-6
Expected OA Rounds
20%
Grant Probability
0%
With Interview (-20.0%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
