Prosecution Insights
Last updated: April 19, 2026
Application No. 18/481,456

METHOD AND SYSTEM FOR ZERO DAY MALWARE SIMILARITY DETECTION

Non-Final OA §103
Filed
Oct 05, 2023
Examiner
JEUDY, JOSNEL
Art Unit
2438
Tech Center
2400 — Computer Networks
Assignee
Blackberry Limited
OA Round
3 (Non-Final)
84%
Grant Probability
Favorable
3-4
OA Rounds
2y 11m
To Grant
67%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
659 granted / 788 resolved
+25.6% vs TC avg
Interviews lower the odds here
−16.9%
Interview Lift
allow rate among resolved cases with interview vs. without
Typical timeline
2y 11m
Avg Prosecution
21 currently pending
Career history
809
Total Applications
across all art units
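The headline figures in this panel are simple ratios over the examiner's resolved cases. As a sketch (the 659/788 counts come from the panel above; the with-interview split below is hypothetical, since the panel reports only the resulting lift):

```python
# Career allow rate: granted / resolved (figures from the panel above)
granted, resolved = 659, 788
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~83.6%, shown as 84%

# Interview lift: allow rate among resolved cases that had an interview,
# minus the overall allow rate. The 67/100 split is hypothetical -- the
# panel reports only the ~-17% lift and the 67% with-interview figure.
with_interview_granted, with_interview_resolved = 67, 100
lift = with_interview_granted / with_interview_resolved - allow_rate
print(f"Interview lift: {lift:+.1%}")
```

With the panel's rounded inputs this lands within a few tenths of the −16.9% shown; the tool presumably computes it from unrounded counts.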

Statute-Specific Performance

§101
19.1%
-20.9% vs TC avg
§103
49.0%
+9.0% vs TC avg
§102
6.8%
-33.2% vs TC avg
§112
8.9%
-31.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 788 resolved cases
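The chart gives each statute's examiner rejection rate and its delta against the Tech Center average, so the implied baseline can be recovered by subtraction. A quick check (rates and deltas copied from the chart) shows all four rows imply the same 40.0% baseline, suggesting the deltas are measured against a single TC-wide figure rather than per-statute averages:

```python
# Each row: (statute, examiner rejection rate %, delta vs TC average %)
rows = [("101", 19.1, -20.9), ("103", 49.0, +9.0),
        ("102", 6.8, -33.2), ("112", 8.9, -31.1)]
for statute, rate, delta in rows:
    tc_avg = rate - delta  # implied Tech Center average
    print(f"S{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% ({delta:+.1f}%)")
```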

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 19, 2025 has been entered.

Response to Arguments

Claims 1, 11 and 21 have been amended. Claims 2-3 and 12-13 have been cancelled. Therefore, claims 1, 4-11 and 14-21 are pending. Claims 1, 4-11 and 14-21 are rejected under 35 U.S.C. § 103 as being unpatentable over SEIFERT, US 20210141897 A1 (IDS Submitted, 02/21/2025), in view of Miserendino, US 20170262633 A1, in further view of Ducau, US 20230342462 A1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 7-10, 11, 14-15 and 17-21 are rejected under 35 U.S.C. § 103 as being unpatentable over SEIFERT, US 20210141897 A1 (IDS Submitted, 02/21/2025), in view of Miserendino, US 20170262633 A1. 1. 
SEIFERT discloses a method at a computing device (See SEIFERT, abstract; determine a particular output classification corresponding to malicious behavior.) comprising: fragmenting a malware sample into a plurality of byte strings, each of the plurality of byte strings having predetermined length, the predetermined length being based on at least one of an amount of functionality to be modeled, or an input size of an embedding network trained on triplet pairs derived from malware and benignware to ensure a differentiation in byte string functionality; (See SEIFERT, [0055]; NB, this mapping covers the first part of the clause, examiner is not required to map the second part of the clause. Each malware sample may generate thousands of raw unpacked file strings or API call events (functionality) and their parameters. Because particular embodiments detect polymorphic malware (malware that consistently changes its identifiable features in order to evade detection) in addition to non-polymorphic malware, some embodiments do not encode potential features directly as sparse binary features (typically zero values that are binary). For instance, if each variant in a family drops or removes a second, temporary file with a partially random name or contacts a command and control (C&C) server with a partially random URL, in some embodiments, the file name or URL is not represented explicitly. Instead, some embodiments encode the raw unpacked file strings and the API calls and their parameters as a collection of N-Grams of characters. In some embodiments, trigrams (i.e., N-Grams where N=3) of characters for all values are used. See [0044] For instance, the raw information may be unpacked file strings (or strings where function calls are made to unpack file strings) and API calls with their associated parameters. This is because malicious content is often packed, compressed, or encrypted, and so calls to decrypt or otherwise unpack the data may be desirable. 
Regarding API calls, certain malicious content may have a specific API call pattern so this information may also be desirable to extract. See also [0059], fig 6 and [0123]) embedding each of the plurality of byte strings in [[an]] the embedding network to generate a plurality of embeddings; (See SEIFERT, [0064] When testing files or other content, some embodiments input known malware files or content to the left-hand side (the training component 340), and new or unknown files or content (e.g., the testing files) are then evaluated using the right-hand side (evaluation component 350). The testing files or content may represent new or different files or content that a model has not yet trained on. Thus during testing, the model's output may represent the cosine distance between the embedding or vector of the known malware content for the left-hand side and the embedding or vector of the malicious or benign file on the right-hand side according to some embodiments. For example, a first set of files in the unknown files 315 may first be subject to file emulation 333 (which may be the same functionality performed by file emulation 303 and performed by the emulator 203), such that the first set of files are emulated and particular information is extracted, such as API calls and packed strings, which are then unpacked. Then features can be selected from the first set of files per 335 (e.g., via the same or similar functionality as described with respect to feature selection 305 or via the feature selector 205). Responsively, unknown pair construction 320 can be performed (e.g., via the same or similar functionality with respect to the training set construction 307 or via the unknown construction component 220) such that similar malicious test files are grouped together and any other benign test files are grouped together. 
This functionality may be the same functionality as the training set construction 307, except that the files are not training data, but test data to test the accuracy or performance of the model. Responsively, in some embodiments, unknown pair evaluation 311 is done for each file in the first set of files. This evaluation may be done using the model training 309 of the training component 340. For example, a first test file of the first set of files can be converted into a first vector and mapped in feature space, and a similarity score can be determined between the first vector and one or more other vectors represented in the labeled files 313 (i.e., other malware files represented as vectors) by determining the distances between the vectors in feature space. In various embodiments, an indication of the similarity score results is then output via the unknown file predictions 317.) for each embedding in the plurality of embeddings, finding a nearest neighbor; (See SEIFERT, [0070] Some embodiments alternatively determine similarity scores or otherwise detect if an unknown file is malicious by replacing the cosine( ) distance with an optional K-Nearest Neighbor (KNN) classifier and assign the unknown file to the voted majority malware family or benign class of the K known files with the highest similarity score(s). Assigning the label of the single closest file (K=1) may perform well. Accordingly, some embodiments only need to find a single file (e.g., as stored to the labeled files 313) that is most similar to an unknown file (in the unknown files 315). ) and setting a predicted family for the malware sample based on a fusion of the nearest neighbor for each of the plurality of embeddings; (See SEIFERT, [0065] In some embodiments, the evaluation component 350 illustrates how files are evaluated or predicted after data has been trained and tested, or a model is otherwise deployed in a particular application. 
For example, after a model has been both trained and tested, a model can be deployed in a web application or other application. Accordingly, a user, for example, may upload a particular file (e.g., an unknown file 315) to a particular web application during a session in order to request prediction results as to whether the particular file is likely associated with malware or belongs to a particular malware family. Accordingly, all of the processes described with respect to the evaluation component 350 can be performed at runtime responsive to the request such that unknown file predictions 317 may indicate whether the file uploaded by the user is associated with malware. As illustrated in the system 300, such prediction can be based on the model training 309 (and/or testing) of other files.) ) wherein when the nearest neighbor for an embedding is outside a predetermined threshold, classifying the embedding as unknown; (See SEIFERT, [0064], fig 7 [0070] Some embodiments alternatively determine similarity scores or otherwise detect if an unknown file is malicious by replacing the cosine ( ) distance with an optional K-Nearest Neighbor (KNN) classifier and assign the unknown file to the voted majority malware family or benign class of the K known files with the highest similarity score(s). Assigning the label of the single closest file (K=1) may perform well. Accordingly, some embodiments only need to find a single file (e.g., as stored to the labeled files 313) that is most similar to an unknown file (in the unknown files 315). ) SEIFERT does not appear to explicitly disclose and wherein, when the fusion comprises an unknown family, flagging the malware sample as a zeroday sample, the flagging causing an action to be performed on the malware sample. However, Miserendino discloses and wherein, when the fusion comprises an unknown family, flagging the malware sample as a zeroday sample, the flagging causing an action to be performed on the malware sample. 
(See Miserendino, [0117] CEP 806 outputs security event threat information and mitigation instructions to mitigation component 810. In embodiments, mitigation component utilizes border-gateway protocol (BGP) messaging to mitigate determined security events and the effects thereof. CEP 806 may configure mitigation efforts and instructions for mitigation component 810 based on reputation scores and threat levels that it determines. Mitigation component 810 takes appropriate mitigation actions based on this information and instructions. For example, mitigation component may instruct router 802 to block all files and other access from identified source. See [0052] The feature vector is an ordered list of ones and zeros indicating either the presence, or absence, of an n-gram within the file's binary representation. An embodiment of the training 204 may then use supervised machine-learning algorithms to train ada-boosted J48 decision trees on the training set. Experimentally it has been found that the predictive capacity of these category specific classifiers is greatly enhanced when operating on files of their own category, at the cost of substantial degradation when operating on files of a different category. [0142-0143] With reference to FIG. 3 discussed above, embodiments of system and method for zero-day malware detection generates a classification label 327 for each sample file. In addition to the classification label 327, embodiments of system and method for zero-day malware detection may also generate a confidence score between zero percent (0%) and one-hundred percent (100%) and include the confidence score in classification label 327. In embodiments, classification label 327 includes two labels (benign or malicious). In embodiments, confidence scores are remapped such that the output is always relative to a “malicious” determination. 
Hence, a 0% score is equivalent to being 100% confident that the sample was benign, a 100% score is equivalent to a 100% confidence that a sample is malicious, and a 50% score is equal certainty between the two classes. [0143] To make a final decision as to whether to deem a sample as benign or malicious embodiments of the system and method allows users to adjust a decision threshold where samples receiving scores above the threshold are marked malicious. This gives the user the ability to dynamically tune the trade-off between system false positives and false negatives. The typical method of providing a default threshold is to either use the natural uncertainty point used by the learner in training (i.e., 50%) or to select an optimal threshold using a measure of quality for binary classification that is a function of the decision threshold (e.g., the Matthews correlation coefficient, F1 score, or Cohen's kappa) as measured against a verification or test set of samples not used for training.) SEIFERT and Miserendino are analogous art because they are from the same field of endeavor which is Intrusion detection. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of SEIFERT with the teaching of Miserendino to include the zero day detection because it would have allowed automated machine-learning, zero-day malware detection. 4. The combination of SEIFERT and Miserendino discloses the method of claim 1, further comprising: prior to fragmenting the malware sample, performing a first training on the embedding network, (See SEIFERT, [0004]) the first training comprising: randomly choosing two malware samples from a malware family; (See SEIFERT, [0062]; embodiments of the training set construction 307 ensure that the file pairs in the training and test sets are unique or different. 
To do so, a first set of malware files may be randomly selected for the training and test sets followed by a pair of files (e.g., of the first set) that include a malware file and a benign file. Responsively, a second similar malware file pair is selected. If either of the files in the second pair match one of the files in the first set, this second malware pair is added to the training set. If it is not in the training set, embodiments add it to the test set. Similarly, in some embodiments, this same process can be done for a second dissimilar malware and benign pair such that the second dissimilar malware and benign pair are compared against the first set. Particular embodiments continue to do this procedure of randomly selecting malware pairs and adding them to training or test sets until each is complete.) randomly choosing a benignware sample from a corpus of benignware; (See SEIFERT, [0062] ) fragmenting each of the two malware samples and the benignware sample into byte strings of the predetermined length; (SEIFERT, [0055] In some embodiments, file emulation 303 includes low-level feature encoding. Each malware sample may generate thousands of raw unpacked file strings or API call events and their parameters. Because particular embodiments detect polymorphic malware (malware that consistently changes its identifiable features in order to evade detection) in addition to non-polymorphic malware, some embodiments do not encode potential features directly as sparse binary features (typically zero values that are binary). For instance, if each variant in a family drops or removes a second, temporary file with a partially random name or contacts a command and control (C&C) server with a partially random URL, in some embodiments, the file name or URL is not represented explicitly. Instead, some embodiments encode the raw unpacked file strings and the API calls and their parameters as a collection of N-Grams of characters. 
In some embodiments, trigrams (i.e., N-Grams where N=3) of characters for all values are used.) creating triplet pairs of an anchor sample and positive sample from the byte strings from the two malware samples and a negative sample from the byte strings from the benignware sample; (See SEIFERT, [0075] ) and training the embedding network for triplet loss based on the triplet pairs. (See SEIFERT, [0047], [0075]) 5. The combination of SEIFERT and Miserendino discloses the method of claim 4, wherein the two malware samples and the benignware sample are raw executables. (See SEIFERT,[ 0026] See also Miserendino [0041], [0045]; may include a machine-learning trainer 110 that executes known malware executables in the sandbox environment and observes execution state data as features to identify precursors and alerting signals of change in software behavior 115 for the malware. The machine-learning trainer 110 may be the same machine-learning trainer described above and may do this without supervision. An embodiment may incorporate the identified precursors and alerting signals into classifiers, while also using a graphical, dynamic visualization of the execution state of malware. When used to detect malware, an embodiment may then execute unknown files in the same or similar sandbox environment 112 or in a live running system (that is not a sandbox). The embodiment may (1) analyze the system calls and/or execution traces generated by the execution of the unknown file to determine what malware features are present in the system calls and/or execution traces and (2) compare the execution state data (e.g., a graphical, dynamic visualization 114 of the execution state data) of the unknown file to the stored precursors and alerting signals to determine if there are likely matches to the execution state data of known malware from the training repository. 
An embodiment may perform this comparison by comparing dynamic, graphical visualizations 114 of the execution state data of the unknown file and known malware. If such comparisons show similarities or matches, this fact may be used to provide greater confidence that the unknown file is malign.) SEIFERT and Miserendino are analogous art because they are from the same field of endeavor which is Intrusion detection. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of SEIFERT with the teaching of Miserendino to include the zero-day detection because it would have allowed automated machine-learning, zero-day malware detection. 7. The combination of SEIFERT and Miserendino discloses the method of claim 4, further comprising using a gym environment to perform a second training, the second training comprising: creating a training set by: randomly choosing a plurality of first malware samples from a corpus of malware samples; (See SEIFERT, [0062]; embodiments of the training set construction 307 ensure that the file pairs in the training and test sets are unique or different. To do so, a first set of malware files may be randomly selected for the training and test sets followed by a pair of files (e.g., of the first set) that include a malware file and a benign file. Responsively, a second similar malware file pair is selected. If either of the files in the second pair match one of the files in the first set, this second malware pair is added to the training set. If it is not in the training set, embodiments add it to the test set. Similarly, in some embodiments, this same process can be done for a second dissimilar malware and benign pair such that the second dissimilar malware and benign pair are compared against the first set. 
Particular embodiments continue to do this procedure of randomly selecting malware pairs and adding them to training or test sets until each is complete.) fragmenting each of the plurality of first malware samples into a plurality of training byte strings, each of the plurality of training byte strings having a predetermined length; (See SEIFERT, [0055] In some embodiments, file emulation 303 includes low-level feature encoding. Each malware sample may generate thousands of raw unpacked file strings or API call events and their parameters. Because particular embodiments detect polymorphic malware (malware that consistently changes its identifiable features in order to evade detection) in addition to non-polymorphic malware, some embodiments do not encode potential features directly as sparse binary features (typically zero values that are binary). For instance, if each variant in a family drops or removes a second, temporary file with a partially random name or contacts a command and control (C&C) server with a partially random URL, in some embodiments, the file name or URL is not represented explicitly. Instead, some embodiments encode the raw unpacked file strings and the API calls and their parameters as a collection of N-Grams of characters. In some embodiments, trigrams (i.e., N-Grams where N=3) of characters for all values are used.) and organizing the plurality of training byte strings into the training set; (See Miserendino, [0052] The second phase commences once the space of training files is partitioned 202 into appropriate categories. Individual category-specific classifiers are then trained to distinguish between benign and malicious software within the corresponding category (block 204). 
In our case, embodiments gather a collection of training files of known class (benign or malicious), all from the same category of file (as determined by partitioning 202), which are used to train (or construct) 204 a training set for the category specific classifier as described in the following: the collection of files in each category undergoes n-gram feature selection and extraction analysis techniques, as discussed above, to construct binary feature vector representations of each file. Feature selection comprises evaluating features of all the files in the category to identify a subset of those that are the most effective at distinguishing between benign and malicious files. An embodiment of the training 204 uses information gain techniques to evaluate these features. As mentioned above, the features are n-grams, ordered sequence of entities (grams) of length n and a gram is a byte of binary data. The feature vector is an ordered list of ones and zeros indicating either the presence, or absence, of an n-gram within the file's binary representation. An embodiment of the training 204 may then use supervised machine-learning algorithms to train ada-boosted J48 decision trees on the training set. Experimentally it has been found that the predictive capacity of these category specific classifiers is greatly enhanced when operating on files of their own category, at the cost of substantial degradation when operating on files of a different category. ) creating a support set by: randomly choosing a plurality of second malware samples from a corpus of malware samples; (See Miserendino, [0139] The enhanced feature selection module may allow users to configure the use of a feature selection algorithm implemented by selection module and, hence, feature selection and extraction 1004. 
Upon receipt of feature selection configuration input (e.g., from user) (block 1003), feature extraction 1004 may be implemented by enhanced feature selection module executing algorithms chosen from algorithms including random selection, entropy-gain, minimum Redundancy Maximum Relevance (mRMR), and a novel entropy-mask feature exclusion algorithm described below. In addition to adding new algorithms for feature selection, the feature selection module also allows users to chain feature extraction approaches and/or algorithms to create multi-stage feature selection 1004 using a hybrid approach. ) fragmenting each of the plurality of second malware samples into a plurality of support byte strings, each of the plurality of support byte strings having a predetermined length; (See SEIFERT, [0055]) and organizing the plurality of support byte strings into the support set; (See Miserendino [0075] As noted above, the EFVG provides a consistent framework to take any combination of attributes from a variety of attribute classes to construct an extended feature vector. Embodiments reorganize each file (or data structure) internally into “feature-type”—“set of attributes” key-value pairs, and stores the method for deriving the features for the attributes corresponding to a given attribute class in the EFVG. FIG. 5 provides an illustration of this process.) choosing a batch of the training set; (See ) using the support set and batch of the training set to update the embedding network by establishing a reward for batch matching for the entire batch of the training set. (See SEIFERT [0123] Experimental Setup. The SNN was implemented and trained using the PyTorch deep learning framework. The deep learning results were computed on an NVidia P100 GPU (Graphics Processing Unit). The Jaccard Index computations were also implemented in Python. For learning, the minibatch size was set to 256, and the learning rate was set to 0.01. 
In some embodiments, the network architecture parameters are illustrated as shown in FIG. 5.) SEIFERT and Miserendino are analogous art because they are from the same field of endeavor which is Intrusion detection. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of SEIFERT with the teaching of Miserendino to include the zero-day detection because it would have allowed automated machine-learning, zero-day malware detection. 8. The combination of SEIFERT and Miserendino discloses the method of claim 7, wherein the support set and batch of the training set are embedded using the embedding network. (See SEIFERT [0123] Experimental Setup. The SNN was implemented and trained using the PyTorch deep learning framework. The deep learning results were computed on an NVidia P100 GPU (Graphics Processing Unit). The Jaccard Index computations were also implemented in Python. For learning, the minibatch size was set to 256, and the learning rate was set to 0.01. In some embodiments, the network architecture parameters are illustrated as shown in FIG. 5.) 9. The combination of SEIFERT and Miserendino discloses the method of claim 8, wherein the establishing the reward comprises performing a batch neighbor search for the batch of the training set. (See SEIFERT, [0028] Existing technologies have various functionality shortcomings leading to lower prediction accuracy and higher error rate, among other things. A key component of some malware detection technologies is determining similar content in a high-dimensional input space. For example, instance-based malware classifiers such as the K-Nearest Neighbor (KNN) classifier rely on the similarity score or distance between two files. The K-Nearest Neighbor classifier may be an optimal classifier in certain situations given an infinite amount of training data. 
Malware clustering, which identifies groups of malicious content may also rely on computing a similarity score between sets of content. ) 10. The combination of SEIFERT and Miserendino discloses the method of claim 9, wherein the establishing the reward further comprising repeating batch neighbor search over the entire training set. (See [0028] Existing technologies have various functionality shortcomings leading to lower prediction accuracy and higher error rate, among other things. A key component of some malware detection technologies is determining similar content in a high-dimensional input space. For example, instance-based malware classifiers such as the K-Nearest Neighbor (KNN) classifier rely on the similarity score or distance between two files. The K-Nearest Neighbor classifier may be an optimal classifier in certain situations given an infinite amount of training data. Malware clustering, which identifies groups of malicious content may also rely on computing a similarity score between sets of content. ) 11. As to claim 11, the claim is rejected under the same rationale as claim 1. See the rejection of claim 1 above. a processor; and memory, (See SEIFERT, [0022]) wherein the computing device is configured to: 14. As to claim 14, the claim is rejected under the same rationale as claim 4. See the rejection of claim 4 above. 15. As to claim 15, the claim is rejected under the same rationale as claim 5. See the rejection of claim 5 above. 17. As to claim 17, the claim is rejected under the same rationale as claim 7. See the rejection of claim 7 above. 18. As to claim 18, the claim is rejected under the same rationale as claim 8. See the rejection of claim 8 above. 19. As to claim 19, the claim is rejected under the same rationale as claim 9. See the rejection of claim 9 above. 20. As to claim 20, the claim is rejected under the same rationale as claim 10. See the rejection of claim 10 above. 21. 
As to claim 21, the claim is rejected under the same rationale as claim 1. See the rejection of claim 1 above.

Claims 6 and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over SEIFERT, US 20210141897 A1 (IDS Submitted, 02/21/2025), in view of Miserendino, US 20170262633 A1, in further view of Ducau, US 20230342462 A1.

6. The combination of SEIFERT and Miserendino does not appear to explicitly disclose the method of claim 4, wherein triplet loss is calculated based on

L = Σ_{i=1}^{b} [ ‖f(x_i^a) − f(x_i^p)‖₂² − ‖f(x_i^a) − f(x_i^n)‖₂² + α ]₊

where L is the loss, x^a is the anchor sample, x^p is the positive sample, x^n is the negative sample, b is the length of a batch of triplet pairs, and α is a margin enforced between positive and negative pairs. However, Ducau discloses wherein triplet loss is calculated based on

L = Σ_{i=1}^{b} [ ‖f(x_i^a) − f(x_i^p)‖₂² − ‖f(x_i^a) − f(x_i^n)‖₂² + α ]₊

where L is the loss, x^a is the anchor sample, x^p is the positive sample, x^n is the negative sample, b is the length of a batch of triplet pairs, and α is a margin enforced between positive and negative pairs. (See Ducau, [0109-0111]) SEIFERT, Miserendino and Ducau are analogous art because they are from the same field of endeavor, which is intrusion detection. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of SEIFERT and Miserendino with the teaching of Ducau to include the computation formula because it would have been used in calculating the triplet loss.

16. As to claim 16, the claim is rejected under the same rationale as claim 6. See the rejection of claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
NISSIM NIR, WO 2022101909 A1, title “ METHODS AND SYSTEMS FOR TRUSTED UNKNOWN MALWARE DETECTION AND CLASSIFICATION IN LINUX CLOUD ENVIRONMENTS.” Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSNEL JEUDY whose telephone number is (571)270-7476. The examiner can normally be reached M-F 10:00-8:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arani T Taghi can be reached at (571)272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. Date: 2/7/2026 /JOSNEL JEUDY/Primary Examiner, Art Unit 2438
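For readers untangling the claim language above, the pipeline the rejection maps (fragment a sample into fixed-length byte strings, embed each string, find its nearest labelled neighbor subject to a distance threshold, fuse per-fragment votes, and flag a majority-unknown sample as zero-day) can be sketched end to end. This is an illustrative reconstruction only, not the applicant's or SEIFERT's implementation: the embedding is a stand-in hash rather than a network trained on triplet pairs, and the fragment length, threshold, and family names are hypothetical.

```python
import hashlib
from collections import Counter

FRAG_LEN = 64      # hypothetical "predetermined length" for byte strings
THRESHOLD = 0.35   # hypothetical nearest-neighbor distance cutoff

def fragment(sample: bytes, length: int = FRAG_LEN) -> list:
    """Fragment a sample into fixed-length byte strings (claim 1, step 1)."""
    return [sample[i:i + length] for i in range(0, len(sample), length)]

def embed(chunk: bytes) -> list:
    """Stand-in for the trained embedding network: hash each byte string
    to a small deterministic vector. The claim uses a network trained on
    triplet pairs; this is only a placeholder."""
    digest = hashlib.sha256(chunk).digest()
    return [b / 255.0 for b in digest[:8]]

def distance(a, b) -> float:
    """Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_family(vec, reference) -> str:
    """Nearest labelled neighbor; 'unknown' if the best match falls
    outside the predetermined threshold (claim 1's threshold step)."""
    family, best = min(((fam, distance(vec, ref)) for fam, ref in reference),
                       key=lambda pair: pair[1])
    return family if best <= THRESHOLD else "unknown"

def classify(sample: bytes, reference) -> str:
    """Fuse per-fragment nearest neighbors by majority vote; when the
    fusion comes out 'unknown', flag the sample as a zero-day candidate."""
    votes = Counter(nearest_family(embed(f), reference)
                    for f in fragment(sample))
    top_family, _ = votes.most_common(1)[0]
    return "zero-day" if top_family == "unknown" else top_family
```

A reference set might look like `[("family_a", embed(b"A" * 64))]`; a sample whose fragments all land far from every labelled embedding fuses to "unknown" and gets flagged, which is the step the rejection reads onto Miserendino rather than SEIFERT.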

Prosecution Timeline

Oct 05, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §103
Jul 15, 2025
Response Filed
Sep 27, 2025
Final Rejection — §103
Nov 28, 2025
Response after Non-Final Action
Dec 19, 2025
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602352
UNIVERSAL DATA SCAFFOLD BASED DATA MANAGEMENT PLATFORM
2y 5m to grant • Granted Apr 14, 2026
Patent 12591709
SYSTEMS AND METHODS FOR FUNCTIONALLY SEPARATING GEOSPATIAL INFORMATION FOR LAWFUL AND TRUSTWORTHY ANALYTICS, ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2y 5m to grant • Granted Mar 31, 2026
Patent 12585744
Method for Performing Biometric Feature Authentication When Multiple Application Interfaces are Simultaneously Displayed
2y 5m to grant • Granted Mar 24, 2026
Patent 12579264
CYBER THREAT INFORMATION PROCESSING APPARATUS, CYBER THREAT INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM STORING CYBER THREAT INFORMATION PROCESSING PROGRAM
2y 5m to grant • Granted Mar 17, 2026
Patent 12566727
UNIVERSAL DATA SCAFFOLD BASED DATA MANAGEMENT PLATFORM
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
67%
With Interview (-16.9%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 788 resolved cases by this examiner. Grant probability derived from career allow rate.
