DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-16 are pending for examination.
Claim Objections
Claims 2, 8, 10 and 16 are objected to because of the following informalities:
Claims 2 and 10 recite “the first cross-domain deep learning model” in lines 4 and 3, respectively. For consistency, it is suggested to recite “the cross-domain first deep learning model”.
Claims 8 and 16 recite “the layer exporting…”. For clarity, it is suggested to recite “exporting the layer…”.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 3, 4, 6, 7, 9, 11, 12, 14, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 9, 13, 16, 13, and 12, respectively, of U.S. Patent No. 12069072. Although the claims at issue are not identical, they are not patentably distinct from each other because they are substantially similar in scope, recite similar limitations, and produce the same end result of creating a cross-domain deep learning model from a first deep learning model from a first domain and a second deep learning model from a second domain. Therefore, the claims of the present application are considered obvious variants of the claims of U.S. Patent No. 12069072.
The table below shows a comparison between the claims of the instant application and the corresponding claims of the reference U.S. Patent.
Instant application No. 18/774,688
Claim 1. A computer-based system for creating a cross-domain deep learning model for detecting malicious network traffic in a multi-domain network, comprising:
a computing module coupled to a multi-domain network and having a processor and a memory for storing instructions;
a machine learning module, communicatively coupled to the computing module, for building a plurality of deep learning models; and
a cross-domain training module, communicatively coupled to the machine learning module and the computing module, for using cross domain training to produce cross domain models; wherein the instructions, when executed by the processor, cause the computer-based system to:
observe traffic in the network from at least a first and a second domain, the first and second domain being different from each other;
build a first deep learning model by training deep learning layers using data from the first domain;
build a second deep learning model by training deep learning layers using data from the second domain;
export at least one layer of the second deep learning model to the first deep learning model; and
update the first deep learning model by training at least one layer of the first deep learning model using data from the second deep learning model, thereby creating a cross-domain first deep learning model configured to compute a score for traffic from the first domain using at least the cross-domain first deep learning model, wherein the score indicates a likelihood of identifying the traffic from the first domain as being malicious or invalid.
Ref. US Patent No. 12069072
Claim 9. A computer-based system for identifying malicious or invalid network traffic in a multi-domain network, the system comprising:
a computing module having a processor and a memory for storing instructions;
a machine learning module for building a plurality of deep learning models; and
a cross-domain training module communicatively coupled to the machine learning module; wherein the instructions, when executed by the processor, cause the computing module to:
observe traffic in a network from at least a first and a second domain, the first and second domain being different from each other;
build a first and a second deep learning model from the first and second domains, respectively, combining the first and second plurality of deep machine learning embeddings in a common vector space V being created therefrom; and using the combination of first and second plurality of deep machine learning embeddings in the common vector space V to train each of the first and the second deep machine learning models; cause the cross-domain training module to update at least the first deep learning model using data imported from the second deep learning model, thereby creating a cross-domain trained model; and compute a score for the traffic using the at least one cross-domain trained model, wherein the score indicates a likelihood of identifying the traffic as being malicious or invalid.
Claim 3. The computer-based system for creating a cross-domain deep learning model for detecting malicious network traffic in a multi-domain network of claim 1, wherein the instructions, when executed by the processor, further cause the computer-based system to:
continuously update the cross-domain first deep learning model as more data is received and processed from at least one of the first domain and the second domain.
Claim 13. The computer-based system of claim 9, wherein the cross-domain trained model is continuously evaluated for performance and automatically updated.
Claim 4. The computer-based system for creating a cross-domain deep learning model for detecting malicious network traffic in a multi-domain network of claim 1, wherein said instructions for causing said computer-based system to build the first deep learning model by training deep learning layers using data from the first domain further comprise:
constructing a first plurality of deep machine learning embeddings of traffic events in the first domain; and
wherein said instructions for causing said computer-based system to build the second deep learning model by training deep learning layers using data from the second domain further comprise:
constructing a second plurality of deep machine learning embeddings of traffic events in the second domain.
Claim 16. …observe traffic in the network;
compute traffic embeddings from the first and second domain using the plurality of embedding modules; compute multi-domain embeddings from the first and second domain; train a deep learning model using the multi-domain embeddings,…
Claim 6. The computer-based system for creating a cross-domain deep learning model for detecting malicious network traffic of claim 1, wherein the cross-domain first deep learning model is continuously evaluated for performance and automatically updated.
Claim 13. The computer-based system of claim 9, wherein the cross-domain trained model is continuously evaluated for performance and automatically updated.
Claim 7. The computer-based system for creating a cross-domain deep learning model for detecting malicious network traffic of claim 1, wherein each the first domain and the second domain are one of: cybersecurity data, video data, web interface interactions, web interface transactions, web advertising, mobile site advertising, advertising in streaming, and advertising in over-the-top services.
Claim 12. The computer-based system of claim 9, wherein each the first domain and the second domain are one of: cybersecurity data, video data, web interface interactions, web interface transactions, web advertising, mobile site advertising, advertising in streaming, and advertising in over-the-top services.
Regarding Claims 9, 11, 12, 14, and 15, the claim limitations are identical and/or equivalent in scope to those of claims 1, 3, 4, 6, and 7. Therefore, Claims 9, 11, 12, 14, and 15 of Application No. 18/774,688 correspond respectively to Claims 9, 13, 16, 13, and 12 of U.S. Patent No. 12069072.
Allowable Subject Matter
Claims 1-16 are allowable over the prior art of record, but remain subject to the nonstatutory double patenting rejections set forth above.
Prior Art of Record
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The examiner has cited the following references on the PTO-892:
US 2019/0356684 (Sinha et al.): Sinha teaches a robotic activity detection system (or simply “bot detection system”) that generates a machine-learning model. Each domain can have a separate machine-learning model that is trained to detect and classify network sessions for the domain. The bot detection system can use the individual machine-learning models for the domains to generate a domain-agnostic machine-learning model. In particular, the bot detection system generates the domain-agnostic machine-learning model by combining the machine-learning models for domains that have high-quality network session data [¶¶ 0023, 0025].
US 2020/0175408 (Baughman et al.): Baughman teaches selecting layers from a plurality of external deep learning models; concatenating the selected layers from the plurality of external deep learning models to form a core deep learning model; training the core deep learning model; and synchronizing layers in the core deep learning model with the layers from the plurality of external deep learning models using quantum entanglement [Abstract].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD YOUSUF A MIAN whose telephone number is (571) 272-9206. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ARIO ETIENNE can be reached at 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD YOUSUF A. MIAN/Examiner, Art Unit 2457
/ARIO ETIENNE/Supervisory Patent Examiner, Art Unit 2457