Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the Applicant's response, filed on 01/23/2026, to the election/restriction requirement of 11/25/2025. The Applicant elected claims 1-11 for further examination without traverse, canceled claims 12-20, and added claims 21-29.
Claims 1-11 and 21-29 are pending and stand rejected in this Office Action. Claim 1 is independent.
Information Disclosure Statement
The information disclosure statements filed 02/24/2026 comply with 37 CFR 1.97(c) and have been considered. The corresponding PTO-1449 forms have been electronically signed and are attached.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 9 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over
Manoukian et al.: “DATA STREAM PROCESSING FOR DYNAMIC RESOURCE SCHEDULING” (United States Patent US 10425355 B1, DATE PUBLISHED 2019-09-24; and DATE FILED 2016-10-05, hereafter “Manoukian”), in view of
Yonghui Wu et al.: “CONTRASTIVE LEARNING AND MASKED MODELING FOR END-TO-END SELF-SUPERVISED PRE-TRAINING” (Japan Patent Application Publication JP 2024529470 A, DATE PUBLISHED 2024-08-06; and DATE FILED 2022-07-28, hereafter “Wu”).
As per claim 1, Manoukian teaches a computer implemented method of identifying one or more anomalies within a first set of data, comprising:
using a computing device, generating a graphical user interface configured to receive a request to identify one or more anomalies in the first set of data (See Fig. 7, col. 3, lines 56-57 and col. 32, lines 13-17, a user device can be configured to detect user input received at a graphical user interface of the device to receive the requested data and/or to present the received data; composites in a data element may include an indication of whether an abnormality exists in the test result or measurement. User device configured with graphical user interface teaches the interface being generated);
receiving the first set of data in response to the user's query (See col. 1, lines 55-56 and col. 3, lines 56-57, querying a lookup table using the condition composite; and detecting user input received at a user interface of the device. The user input can include, for example, an identifier of an object or entity, an instruction, a characterization of an object or entity, an identification of an assessment to be performed, a specification of an aggregation or data processing to be performed, and/or an identification of a destination for a data-analysis report);
generating a plurality of tokens representative of the first set of data (See col. 15, lines 8-9 and 10-14, associating one or more tags with the data; and the tags may have been input by users, learned, pre-defined, generated by outside third-party mapping sources, and/or gathered from other components and/or data stores of the interaction system. Here the tag reads on the token);
using a tokenizer, generating token ID sequences corresponding to the plurality of tokens (See col. 15, lines 29-32, tagging engine receives data, reads metadata associated with the data, semantically scans the content of the data, and associates one or more tags with the data and the tags may be stored in association with the data and/or stored independent from the data but include an identifier such that when searching tags the data may be capable of population. Here associating tag with data and including identifier teaches generating token ID sequences and the tagging engine reads on the tokenizer).
Manoukian does not explicitly teach using a first machine learning model, processing the token ID sequences into first latent space representations.
However, Wu teaches using a first machine learning model, processing the token ID sequences into first latent space representations (See Page 10, Paragraph 5, the input to the machine learning based model of the present disclosure may be latent coding data (e.g., a latent space representation of the input, etc.). The machine learning based model may process the latent coding data to generate an output.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manoukian with the teachings of Wu because Manoukian is dedicated to applying particular protocols to a data stream to facilitate selective, reliable and efficient processing of data elements within the data streams based on composites of the data elements, and Wu is dedicated to machine learning, more specifically to an improved end-to-end self-supervised pre-training framework that leverages a combination of contrastive loss and mask modeling loss terms. The combined teaching would have enabled Manoukian to apply machine learning (data mining, machine-learned rules, machine-defined rules, and machine-learning algorithms) to predict or forecast patterns in the mined data.
Manoukian in view of Wu further teaches the following:
using the first machine learning model, associating each of the plurality of tokens with one or more portions of context data by correlating the first latent space representations with the one or more portions of context data (See Wu: Page 10, Paragraph 5, a latent space representation of the input; and Manoukian: col. 15, lines 47-50, providing meaning and/or give context to the particular record of data and the meaning and/or context may assist tagging engine 510 to determine one or more tags to associate with the data. Here in a combined teaching of determining tags to associate with the data and providing meaning and/or give context to the particular record of data),
processing the first latent space representations to identify the one or more anomalies (See Wu: Page 10, Paragraph 5, a latent space representation of the input, etc.; and Manoukian: col. 32, lines 16-17, data element may include an indication of whether an abnormality exists in the test result or measurement),
wherein the one or more anomalies is correlated with the one or more portions of context data (See Manoukian: col. 32, lines 16-17, data element may include an indication of whether an abnormality exists in the test result or measurement).
As per claim 9, Manoukian in view of Wu teaches the computer-implemented method of claim 1, wherein the first machine learning model is at least one of a recurrent neural network or a convolutional neural network, and further comprising applying a contrastive learning algorithm to neuron activations of one or more hidden layers of the first machine learning model (See Wu: Page 3, a framework that combines contrastive learning and mask modeling, where the former trains a model to discretize input data (e.g., a continuous signal such as a continuous speech signal) into a finite set of differentiable tokens, and the latter trains a model to learn contextualized representations by solving a masked prediction task that consumes the discretized tokens; and the machine learning models include neural networks (e.g., deep neural networks) or nonlinear and/or linear models. The neural networks may include recurrent neural networks, convolutional neural networks, feed-forward neural networks, or other forms of neural networks).
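Wu describes the contrastive component only at a high level. As purely illustrative context, with hypothetical activation data and an assumed InfoNCE-style objective (neither taken from the cited references), contrastive learning over hidden-layer activations may be sketched as:

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """Minimal InfoNCE-style contrastive loss over paired activation vectors."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                     # pairwise cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(probs)).mean())         # i-th positive sits on the diagonal

# Hypothetical hidden-layer activations for 4 inputs and their augmented views.
rng = np.random.default_rng(1)
acts = rng.normal(size=(4, 16))
views = acts + 0.01 * rng.normal(size=(4, 16))

loss_matched = info_nce(acts, views)           # correct pairings: low loss
loss_mismatched = info_nce(acts, views[::-1])  # scrambled pairings: higher loss
```

The loss pulls each activation toward its own augmented view and pushes it away from the other examples in the batch, which is the general pattern the cited framework describes.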
As per claim 23, Manoukian in view of Wu teaches the computer-implemented method of claim 1, further comprising performing an indexed vector search to associate the first latent space representations with latent space representations of context data (See Wu: Pages 5 and 10, representing input data by a latent space and generating a first set of context vectors; and Manoukian: col. 10, line 9, performing semantic tagging and indexing of data).
Claims 2-6 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over
Manoukian in view of Wu, as applied to claim 1 and further in view of
SJOEGREN et al.: “COMPUTER-IMPLEMENTED METHOD FOR ANOMALY DETECTION AND/OR PREDICTIVE MAINTENANCE” (China Patent CN 112655004 B, DATE PUBLISHED 2024-03-26; and DATE FILED 2019-09-05, hereafter “SJOEGREN”).
As per claim 2, Manoukian in view of Wu does not explicitly teach the computer implemented method of claim 1, wherein the first set of data comprises data related to security assessments of a plurality of networked hosts.
However, SJOEGREN teaches the computer implemented method of claim 1, wherein the first set of data comprises data related to security assessments of a plurality of networked hosts (See Page 66, determining the type of security vulnerability or that a particular user, computer, server, network or the like is considered untrustworthy. The corresponding information may be communicated to the user using an appropriate user interface (e.g., a display)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manoukian in view of Wu with the teachings of SJOEGREN because Manoukian is dedicated to applying particular protocols to a data stream to facilitate selective, reliable and efficient processing of data elements within the data streams based on composites of the data elements, Wu is dedicated to machine learning, more specifically to an improved end-to-end self-supervised pre-training framework that leverages a combination of contrastive loss and mask modeling loss terms, and SJOEGREN is dedicated to anomaly detection and/or predictive maintenance, in particular using anomaly value detection in structured or unstructured data. The combined teaching would have enabled Manoukian in view of Wu to reliably and timely detect abnormalities in composites of data elements within the data streams that may damage the proper operation of the system.
As per claim 3, Manoukian in view of Wu, and further in view of SJOEGREN teaches the computer implemented method of claim 2, wherein the one or more portions of context data comprises data indicative of environments, events, topics or themes associated with at least a subset of the data related to said security assessments (See SJOEGREN: Page 78, the predictive analysis component may be configured to aggregate the collected data and model to predict when maintenance, imminent security vulnerabilities, fraudulent transactions, or users, etc.).
As per claim 4, Manoukian in view of Wu, and further in view of SJOEGREN teaches the computer implemented method of claim 2, wherein the one or more portions of context data comprises at least one of host names, host configurations, identity and access management policies, user names, user permissions, and insecure code lines (See Manoukian: col. 45, lines 45-46, the server may be queried using an identifier; and SJOEGREN: Page 79, the kernel density estimates may be calculated separately for the training set of Mahalanobis distances and residual squared sums, but all layers are combined. The probability of each observed value may be approximated by Monte Carlo integration based on the resulting kernel density function. The abnormal value detection performance can be evaluated in the same manner as the above experiment.).
As per claim 5, Manoukian in view of Wu, and further in view of SJOEGREN teaches the computer-implemented method of claim 1, wherein the first set of data is unstructured text data (See SJOEGREN: Page 21, the data to be processed by the deep neural network may be structured or unstructured data).
As per claim 6, Manoukian in view of Wu, and further in view of SJOEGREN teaches the computer-implemented method of claim 1, further comprising generating the plurality of tokens by determining a plurality of possible segmentations of the first set of data, calculating a probability of each of the segmentations and selecting one or more segmentations with highest probabilities (See: Manoukian: col. 21, lines 58-59, the priority tag may be segmented or divided and added at various positions within the data element; and SJOEGREN: Page 79, the probability of each observed value may be approximated by Monte Carlo integration based on the resulting kernel density function. The abnormal value detection performance can be evaluated in the same manner as the above experiment).
As per claim 28, Manoukian in view of Wu, and further in view of SJOEGREN teaches the computer-implemented method of claim 1, further comprising sorting local outlier factor scores corresponding to the first latent space representations to determine the one or more anomalies (See SJOEGREN: Page 26, the distance metric may be any distance metric adapted to quantify the distance from the latent variable approximation (i.e., the first set of projection values). For example, the distance may be the residual squared sum (RSS), the Mahalanobis distance, or the local outlier factor (LOF) (see, e.g., M. M. Breunig, H.-P. Kriegel, R. T. Ng and J. Sander, "LOF: Identifying Density-Based Local Outliers", ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 2000, page $tag1). The distance may also be an integrated distance based on an integrated distance metric formed by combining two or more of the described distances.).
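As illustrative context for the local outlier factor computation discussed above, the following plain-NumPy sketch (hypothetical data; a simplified rendering of Breunig et al.'s LOF, not code from any cited reference) scores points and sorts them so that anomalies rank first:

```python
import numpy as np

def lof_scores(X: np.ndarray, k: int = 3) -> np.ndarray:
    """Plain-NumPy local outlier factor: a score well above 1 suggests an anomaly."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude each point from its own neighbors
    nbrs = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors of each point
    k_dist = np.sort(d, axis=1)[:, k - 1]        # k-distance of each point
    lrd = np.empty(n)
    for i in range(n):
        # Reachability distance from i to neighbor j: max(k_dist[j], d[i, j]).
        reach = np.maximum(k_dist[nbrs[i]], d[i, nbrs[i]])
        lrd[i] = k / reach.sum()                 # local reachability density
    return np.array([lrd[nbrs[i]].mean() / lrd[i] for i in range(n)])

# Hypothetical latent representations: a tight cluster plus one far-away point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
scores = lof_scores(X, k=3)
ranked = np.argsort(scores)[::-1]  # sorted descending, most anomalous first
```

Sorting the scores, as the claim recites, places the isolated point at the top of the ranking while the clustered points score near 1.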
Claims 7-8, 24 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over
Manoukian in view of Wu, as applied to claims 1 and 23 and further in view of
Khomami Abadi et al.: “SYSTEMS AND METHODS FOR LEARNING ACROSS MULTIPLE CHEMICAL SENSING UNITS USING A MUTUAL LATENT REPRESENTATION” (U.S. Patent Application Publication US 20200272900 A1, DATE PUBLISHED 2020-08-27; and DATE FILED 2020-02-21, hereafter “Khomami Abadi”).
As per claim 7, Manoukian in view of Wu does not explicitly teach the computer-implemented method of claim 1, further comprising generating the correlation by training the first machine learning model to minimize cosine distance between the first latent space representations and one or more second latent space representations of context data.
However, Khomami Abadi teaches the computer-implemented method of claim 1, further comprising generating the correlation by training the first machine learning model to minimize cosine distance between the first latent space representations and one or more second latent space representations of context data (See [0117], the mapping ω may map {right arrow over (s.sub.1)} and {right arrow over (s.sub.2)} to {right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}∈Ω such that an angle between {right arrow over (q.sub.1)} and {right arrow over (q.sub.2)} is maximized. For example, Ω may be determined such that cosine distance d({right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}) is minimized. When the manifolds corresponding to the different analytes in ψ are already orthogonal, ω may reduce to the identity function. As discussed herein, ω may be expressed as a parametric function of some parameter set Y: {right arrow over (q)}=ω({right arrow over (s)}; Y). Values for parameter set Y may be estimated using machine learning techniques, such as feedforward neural networks and Gaussian processes.).
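As illustrative context, the cosine-distance quantity such training would minimize can be sketched as follows; the paired arrays are hypothetical stand-ins for the first and second latent space representations, not data from any reference:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine distance (1 - cosine similarity) between paired vectors."""
    num = np.sum(a * b, axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return 1.0 - num / den

# Hypothetical paired latents: first latent space vs. context representations.
z_first = np.array([[1.0, 0.0], [0.0, 1.0]])
z_context = np.array([[2.0, 0.0], [1.0, 0.0]])

pair_dist = cosine_distance(z_first, z_context)
loss = float(pair_dist.mean())  # the quantity a trainer would drive toward zero
```

Aligned pairs contribute a distance near 0 and orthogonal pairs a distance near 1, so minimizing the mean pulls related representations together, which is the effect the cited passage attributes to minimizing d({right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}).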
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manoukian in view of Wu with the teachings of Khomami Abadi because Manoukian is dedicated to applying particular protocols to a data stream to facilitate selective, reliable and efficient processing of data elements within the data streams based on composites of the data elements, Wu is dedicated to machine learning, more specifically to an improved end-to-end self-supervised pre-training framework that leverages a combination of contrastive loss and mask modeling loss terms, and Khomami Abadi is dedicated to training models across multiple sensing units in a chemical sensing system. The combined teaching would have enabled Manoukian in view of Wu to train a set of models to relate the first values and the second values to a mutual latent representation using the training dataset.
As per claim 8, Manoukian in view of Wu, and further in view of Khomami Abadi teaches the computer-implemented method of claim 7, further comprising generating the correlation by maximizing orthogonality of the first latent space representations and unrelated context data (See Khomami Abadi: [0145] Additionally or alternatively, a (reference) mutual latent space Φ.sup.S that has an isomorphism to each of Φ.sup.1, Φ.sup.2, . . . , Φ.sup.n spaces may be considered and the shared map may optimize the presentation of a global map Φ.sup.S, for example with respect to minimal redundancy (e.g., same cardinality with Φ.sup.i) and maximal relevance for a particular application).
As per claim 24, Manoukian in view of Wu, and further in view of Khomami Abadi teaches the computer-implemented method of claim 23, further comprising performing the indexed vector search by using a data structure configured to accelerate nearest neighbor searches using cosine distance (See Wu: Pages 5 and 10, representing input data by a latent space and generating a first set of context vectors; Manoukian: col. 10, line 9, performing semantic tagging and indexing of data; and Khomami Abadi: [0101] and [0117], the composition and training of the models and/or sub-models are trained using non-parametric techniques, such as t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), or k-nearest neighbors techniques (k-NN); and the mapping ω may map {right arrow over (s.sub.1)} and {right arrow over (s.sub.2)} to {right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}∈Ω such that an angle between {right arrow over (q.sub.1)} and {right arrow over (q.sub.2)} is maximized. For example, Ω may be determined such that cosine distance d({right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}) is minimized. When the manifolds corresponding to the different analytes in ψ are already orthogonal, ω may reduce to the identity function. As discussed herein, ω may be expressed as a parametric function of some parameter set Y: {right arrow over (q)}=ω({right arrow over (s)}; Y). Values for parameter set Y may be estimated using machine learning techniques, such as feedforward neural networks and Gaussian processes).
As per claim 27, Manoukian in view of Wu, and further in view of Khomami Abadi teaches the computer-implemented method of claim 1, wherein determining the one or more anomalies comprises computing local reachability density values for the first latent space representations (See Wu: Pages 5 and 10, representing input data by a latent space and generating a first set of context vectors; Manoukian: col. 10, line 9, performing semantic tagging and indexing of data; and Khomami Abadi: [0101] and [0117], the composition and training of the models and/or sub-models are trained using non-parametric techniques, such as t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), or k-nearest neighbors techniques (k-NN); and the mapping ω may map {right arrow over (s.sub.1)} and {right arrow over (s.sub.2)} to {right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}∈Ω such that an angle between {right arrow over (q.sub.1)} and {right arrow over (q.sub.2)} is maximized. For example, Ω may be determined such that cosine distance d({right arrow over (q.sub.1)}, {right arrow over (q.sub.2)}) is minimized. When the manifolds corresponding to the different analytes in ψ are already orthogonal, ω may reduce to the identity function. As discussed herein, ω may be expressed as a parametric function of some parameter set Y: {right arrow over (q)}=ω({right arrow over (s)}; Y). Values for parameter set Y may be estimated using machine learning techniques, such as feedforward neural networks and Gaussian processes).
Claims 10-11 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over
Manoukian in view of Wu, as applied to claim 1 and further in view of
DUPPILS et al.: “A METHOD FOR IDENTIFYING VULNERABILITIES IN COMPUTER PROGRAM CODE AND A SYSTEM THEREOF” (WIPO Patent Application Publication WO 2021148625 A1, DATE PUBLISHED 2021-07-29; and DATE FILED 2021-01-22, hereafter “DUPPILS”).
As per claim 10, Manoukian in view of Wu does not explicitly teach the computer-implemented method of claim 1, further comprising, using the tokenizer, applying a unigram language model.
However, DUPPILS teaches the computer-implemented method of claim 1, further comprising, using the tokenizer, applying a unigram language model (See Page 10, lines 20-25, most combinations of words do not form an acceptable sentence, and there are many ways of building a language model for word representations in this disclosure. Here, the word reads on the unigram).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manoukian in view of Wu with the teachings of DUPPILS because Manoukian is dedicated to applying particular protocols to a data stream to facilitate selective, reliable and efficient processing of data elements within the data streams based on composites of the data elements, Wu is dedicated to machine learning, more specifically to an improved end-to-end self-supervised pre-training framework that leverages a combination of contrastive loss and mask modeling loss terms, and DUPPILS is dedicated to identifying vulnerabilities in computer program code. The combined teaching would have enabled Manoukian in view of Wu to use machine learning in the computer security domain to alleviate the great cost of human resources in monitoring open-source projects for potential vulnerabilities.
As per claim 11, Manoukian in view of Wu, and further in view of DUPPILS teaches the computer-implemented method of claim 1, further comprising identifying one or more character sequences indicative of inconsistent portions of the first set of data using a classifier model and applying an integrated gradients algorithm to neuron activations of one or more hidden layers of the classifier model (See DUPPILS: Page 21, lines 4-9, and Page 38, lines 10-13, the Adaptive Gradient algorithm (AdaGrad) [19] has the learning rate adjusted for each parameter. Infrequent parameters have a higher learning rate for more substantial updates. Frequent parameters instead have a lower learning rate, leading to smaller updates but more frequent iteration. This method achieves good performance on sparse gradients, such as NLP tasks; and the number of trainable features in the model in total is slightly below 100k with a training set of size slightly above 100k. When there is less training data than features in a model, the model may not be able to learn the optimal hidden states).
As per claim 25, Manoukian in view of Wu, and further in view of DUPPILS teaches the computer-implemented method of claim 1, wherein processing the token ID sequences into the first latent space representations comprises applying an activation function to generate sparse vector outputs (See Wu: Pages 5 and 10, representing input data by a latent space and generating a first set of context vectors; and Manoukian: col. 10, line 9, performing semantic tagging and indexing of data and DUPPILS: Page 14, Latent Semantic Analysis (LSA) is an NLP technique with the purpose of analyzing text documents and extracting useful data. The technique first uses term weights, in this case they have been calculated as a sparse tf-idf matrix of word weights.).
As per claim 26, Manoukian in view of Wu, and further in view of DUPPILS teaches the computer-implemented method of claim 1, further comprising training the first machine learning model using paired first training data-objects comprising training security assessment data and second training data-objects comprising associated training context data (See DUPPILS: Page 5, a security related example of data in training set and a non-security example of data in training set; and Wu: Page 6, a mask modeling pre-training output generated based on the second set of context vectors 36 and the plurality of discretized identifiers).
Claims 21-22 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over
Manoukian in view of Wu, as applied to claim 1 and further in view of
LIBBEY; DAVID: “GENERATION OF GRAMMAR-COMPLIANT PROGRAMMING LANGUAGE CODE USING MACHINE LEARNING” (U.S. Patent Application Publication US 20250165228 A1, DATE PUBLISHED 2025-05-22; and DATE FILED 2023-11-17, hereafter “LIBBEY”).
As per claim 21, Manoukian in view of Wu does not explicitly teach the computer-implemented method of claim 1, wherein generating the plurality of tokens comprises iteratively estimating token probabilities, calculating loss, and reducing vocabulary size to generate words, sub-words, or tokens.
However, LIBBEY teaches the computer-implemented method of claim 1, wherein generating the plurality of tokens comprises iteratively estimating token probabilities, calculating loss, and reducing vocabulary size to generate words, sub-words, or tokens (See [0007] and [0108], generating the mask by, for each token not in the set of valid next tokens, generating a corresponding masking value that, when applied, reduces or zeros the probability of the token being the next token, and the generative language model generate a new token, then check that new token, and if it is also not grammar-compliant then repeat the process again in an iterative manner until a token is output from the generative language model that is determined to be grammar-compliant.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Manoukian in view of Wu with the teachings of LIBBEY because Manoukian is dedicated to applying particular protocols to a data stream to facilitate selective, reliable and efficient processing of data elements within the data streams based on composites of the data elements, Wu is dedicated to machine learning, more specifically to an improved end-to-end self-supervised pre-training framework that leverages a combination of contrastive loss and mask modeling loss terms, and LIBBEY is dedicated to generating programming language code using a generative language model. The combined teaching would have enabled Manoukian in view of Wu to apply a generative language model to iteratively generate more accurate tokens.
As per claim 22, Manoukian in view of Wu, and further in view of LIBBEY teaches the computer-implemented method of claim 1, further comprising generating token probabilities using the tokenizer by applying a statistical unigram language model configured to assume independence of word occurrences (See LIBBEY: [0007], [0089] and [0108], generating the mask by, for each token not in the set of valid next tokens, generating a corresponding masking value that, when applied, reduces or zeros the probability of the token being the next token; the generative language model generates a new token, then checks that new token, and if it is also not grammar-compliant then repeats the process again in an iterative manner until a token is output from the generative language model that is determined to be grammar-compliant; a plurality of values are generated using a generative language model, and each of these values in the tensor 520 represents an unnormalized probability that a respective token corresponding to the value is the next token given one or more previously generated tokens of the sequence. Another example of the plurality of values is the vector 528 output from the softmax function 514. Each of these values in the vector 528 represents a normalized probability (i.e., between 0 and 1) that a respective token corresponding to the value is the next token given one or more previously generated tokens of the sequence.).
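As illustrative context for the independence assumption of a statistical unigram language model, the following sketch (a hypothetical token stream, not drawn from any cited reference) scores a sequence as a product of unigram probabilities:

```python
import math
from collections import Counter

# Hypothetical token stream; a statistical unigram model treats each token
# occurrence as independent, so P(sequence) is the product of unigram P(token).
tokens = "the scan found the open port and the scan flagged the open port".split()
counts = Counter(tokens)
total = sum(counts.values())
prob = {tok: n / total for tok, n in counts.items()}

def sequence_log_prob(seq):
    """log P(seq) under the unigram independence assumption."""
    return sum(math.log(prob[t]) for t in seq)

lp_frequent = sequence_log_prob(["the", "scan"])  # built from frequent unigrams
lp_rare = sequence_log_prob(["flagged", "and"])   # built from rare unigrams
```

Because each token's probability is estimated independently of its neighbors, sequences built from frequent unigrams score higher than sequences of rare ones regardless of word order.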
As per claim 29, Manoukian in view of Wu, and further in view of LIBBEY teaches the computer-implemented method of claim 1, further comprising mapping each of the first latent space representations to a unique identifier stored in the search index to retrieve corresponding original values (See Wu: Page 10, the input to the machine learning based model of the present disclosure may be latent coding data (e.g., a latent space representation of the input, etc.); and Manoukian: col. 15, lines 21-28, tagging engine 510 may have access to other data to compare the analyzed metadata against (e.g., to identify that the author's name corresponds to Dr. Brown who is an oncologist). Other examples of metadata that may be included in one or more fields include author, document type, creation time and date, last update time and date, upload time and date, geographic location, unique ID associated with the client or facility where the data originated, and other similar fields).
Related Prior Art
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the PTO-892 Notice of References Cited.
Conclusion
Examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-5] VI. PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS: A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). See also MPEP § 2123.
In the case of amending the Claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUEN S LU whose telephone number is (571) 272-4114. The examiner can normally be reached M-F, 8:00-19:00, Mid-Flex 2 hours.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
KUEN S LU /Kuen S Lu/
Art Unit 2156
Primary Patent Examiner
March 11, 2026