Prosecution Insights
Last updated: April 19, 2026
Application No. 17/814,005

DATA SLICING FOR INTERNET ASSET ATTRIBUTION

Final Rejection: §101, §103
Filed: Jul 21, 2022
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Palo Alto Networks Inc.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 1 granted / 2 resolved; -5.0% vs TC avg)
Interview Lift: +100.0% (resolved cases with interview vs. without)
Typical Timeline: 4y 0m avg prosecution; 33 currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 2 resolved cases
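The headline examiner statistics above (50% career allow rate, +100% interview lift) can be reproduced from raw case counts. A minimal sketch, using two hypothetical resolved-case records whose field names are invented for illustration, not taken from any real dataset:

```python
# Illustrative only: recompute allow-rate and interview-lift style
# statistics from two hypothetical resolved cases.
cases = [
    {"granted": True,  "interview": True},   # hypothetical: granted after interview
    {"granted": False, "interview": False},  # hypothetical: abandoned, no interview
]

resolved = len(cases)
granted = sum(c["granted"] for c in cases)
allow_rate = granted / resolved              # 1 granted / 2 resolved -> 0.5

with_iv = [c for c in cases if c["interview"]]
without_iv = [c for c in cases if not c["interview"]]
rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)          # 1.0
rate_without = sum(c["granted"] for c in without_iv) / len(without_iv) # 0.0

# Lift expressed in percentage points, since the no-interview rate is zero.
lift = rate_with - rate_without              # +1.0, i.e., +100 points
```

With only two resolved cases, the "+100% lift" is a difference of a single outcome, which is why the dashboard qualifies it as a career-data estimate.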

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment/Argument

1. Applicant's arguments against the rejection under 35 U.S.C. 101, filed on October 27, 2025, have been fully considered but are not persuasive. Applicant first contends that the claims are directed to a "specifically recited improvement to an asset attribution model on data slices," and that the claimed sequence of steps for selecting asset subsets, evaluating attribution accuracy, and modifying model inputs demonstrates a technological improvement rather than an abstract idea. However, when the claims are given their broadest reasonable interpretation, the recited actions for evaluating and updating or retraining a model merely describe obtaining information, analyzing information, and adjusting the inputs used in subsequent analysis. As established in MPEP 2106.04(a)(2)(III), these activities fall within the "mental processes" grouping of abstract ideas. The Federal Circuit in Electric Power Group held that claims directed to "collecting information, analyzing it, and displaying certain results" were abstract, even where the claims recited a detailed analytical workflow and technical data environment.

Applicant further asserts on page 11 that the claims cannot reasonably be characterized as mental activities because a human mind could not practically maintain and invoke an attribution model while evaluating accuracy at scale. This argument is not persuasive. Courts have repeatedly explained that claims do not avoid abstraction merely because a computer is used to perform the analysis at greater speed or scale. In Mortgage Grader, for example, the Federal Circuit held that claims were abstract even when implemented on a computer, because the underlying activity remained one of evaluation and decision-making.
Likewise here, determining whether attribution accuracy "fails an accuracy criterion" reflects comparison and judgment of predicted versus known results, which are of the same character as activities courts have found abstract, regardless of computational complexity.

Applicant argues on page 12 that the claims recite a "specific technique" for defining subsets of assets based on rules and therefore do not monopolize an abstract idea. This argument is not persuasive. The claims do not recite a specific technical technique, but rather recite "defining a subset…based on a rule" and "evaluating…according to an accuracy criterion." As established in MPEP 2106.04(a)(2)(III), the "mental processes" grouping includes concepts performed in the human mind, such as "observations, evaluations, judgments, and opinions." The recitation of "rules" and "criteria" used to categorize and evaluate data describes a series of judgments and evaluations that can practically be performed in the human mind. Reciting a computer to perform these steps does not transform the mental process into a technological improvement to computer functionality.

Applicant further asserts on page 14 that the Office failed to account for the technical context of cybersecurity. However, as established in MPEP 2106.05(a), an important consideration in determining whether a claim improves technology is the extent to which the claim covers a "particular solution to a problem…as opposed to merely claiming the idea of a solution or outcome." Unlike Enfish, LLC v. Microsoft Corp., where the court found eligibility based on a "specific implementation of a solution" (a self-referential data structure) that improved how a computer stores and retrieves data, the present claims recite high-level logical steps for evaluating accuracy. As noted in MPEP 2106.05(a), an improvement in the information stored by a database or the accuracy of the analysis is not equivalent to an improvement in the computer's functionality.
Because the claims do not recite the specific technical details of how the computer's operation is modified, but instead rely on a computer as a tool to perform an abstract mental process, they do not qualify as a technological improvement. Accordingly, Applicant's arguments have been fully considered but do not overcome the rejection. The claims remain directed to an abstract mental process and do not recite a technological improvement to computer functionality. The rejection under 35 U.S.C. 101 is maintained.

2. Applicant's arguments filed on October 27, 2025 regarding the rejection of claims 1-20 under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant contends that the combination of Raz and Kraning fails to teach engineering features corresponding to metadata fields, and that Kraning is deficient because it relates to identifying organizational ownership rather than modifying model features. These arguments are not persuasive. As set forth in MPEP 2143, the Supreme Court in KSR Int'l Co. v. Teleflex, Inc., recognized that obviousness is supported where a known technique is applied to improve similar systems in the same way. Raz teaches evaluating performance on data slices and performing feature engineering when performance degrades. Kraning teaches that metadata associated with digital assets – including ownership and registration – is information relevant to asset attribution. Under the broadest reasonable interpretation (BRI), organizational ownership is a metadata attribute that serves as data for the recited feature engineering. Notably, the claims do not specify any particular feature engineering technique or transformation beyond using metadata fields as features and configuring the model to process those features.
A person of ordinary skill in the art, seeking to implement or improve a machine-learning-based predictive analytics system that operates over network-accessible data – such as internet traffic, digital assets, or transactional records – in domains including, but not limited to, e-commerce, finance, healthcare, or cybersecurity, would have been motivated to combine the teachings of Raz and Kraning in a single system. In such systems, metadata is commonly used to segment assets for evaluating model accuracy and to inform subsequent refinement of the model when accuracy is found to be inadequate. Once model performance is evaluated on a subset of assets selected based on particular metadata, it would have been a routine design choice for one of ordinary skill in the art to engineer features that include those same metadata fields and configure the model to process them, because those fields define the subset for which the model's accuracy was determined to be insufficient. Applying metadata used for evaluation as input features for model refinement represents a straightforward and predictable application of known machine-learning optimization techniques to improve model accuracy. Accordingly, the combination of Raz and Kraning renders the claimed invention obvious. The rejection of claims 1-20 under 35 U.S.C. 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
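The loop the Office describes for the Raz/Kraning combination (evaluate accuracy on a metadata-defined slice; if it fails the criterion, engineer the slice-defining metadata fields as additional model inputs) can be sketched in a few lines. Everything below, including the field names, the toy model interface, and the 0.80 threshold, is illustrative rather than taken from the application or the cited references:

```python
# Illustrative sketch (not the application's actual code) of the workflow
# as the examiner characterizes it: evaluate model accuracy on a
# metadata-defined slice; if the accuracy criterion fails, add the
# slice-defining metadata fields as additional model features.
def accuracy_on_slice(model, slice_assets, known_org):
    """Fraction of assets in the slice that the model attributes to known_org."""
    preds = [model(asset) for asset in slice_assets]
    return sum(p == known_org for p in preds) / len(slice_assets)

def refine_features(features, rule_fields, model, slice_assets, known_org,
                    criterion=0.80):
    """On failure, engineer the fields that defined the slice as new features."""
    if accuracy_on_slice(model, slice_assets, known_org) < criterion:
        return features + [f for f in rule_fields if f not in features]
    return features

# Toy data: a slice of .gov assets and a model that always predicts wrongly,
# so accuracy (0.0) fails the criterion and "domain" is added as a feature.
gov_slice = [{"domain": "nasa.gov"}, {"domain": "gsa.gov"}]
always_wrong = lambda asset: "SomeOtherOrg"
print(refine_features(["ip_range"], ["domain"], always_wrong,
                      gov_slice, "US Government"))
# -> ['ip_range', 'domain']
```

The sketch makes the examiner's point concrete: nothing in the recited loop constrains *how* the fields become features; the claim covers any mapping from slice-defining metadata to model inputs.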
101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-8 are directed to a method consisting of a series of steps, meaning they are directed to the statutory category of process. Claims 9-20 are directed to storage mediums and processors, which are machines.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Regarding claim 1, the following claim elements are abstract ideas: selecting a first subset of a plurality of assets based, at least in part, on a first rule of one or more rules for selecting from the plurality of assets based on metadata of the plurality of assets, wherein the one or more rules correspond to respective subsets of the plurality of assets including the first subset of the plurality of assets with known attributions to one or more organizations in a plurality of organizations (This is an abstract idea of a "mental process." The limitation recites a mental process involving the application of logical rules to filter asset metadata and identify a subset of assets with known organizational attributions. This is the type of analysis that could be practically performed in the human mind with observation and judgment.
For example, a person could review asset records, apply conditional logic (e.g., "select all assets where domain ends in .gov and country is US"), and manually group the assets that meet those criteria. Since it involves steps that can be carried out in the human mind or with the aid of pen and paper, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).); based on a determination that accuracy of the asset attribution model on the first subset of the plurality of assets fails an accuracy criterion (This is an abstract idea of a "mental process." It involves reviewing an accuracy score and determining whether it meets a predefined threshold. A person could mentally compare, through observation, the calculated accuracy (e.g., 70%) against a set criterion (e.g., 80%) and conclude the model has failed to meet the requirement. This type of evaluation can be readily performed in the human mind or with simple tools, and thus constitutes an abstract idea of a mental process.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: updating architecture for the asset attribution model (The step of "updating architecture" for a model is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).); asset attribution model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.);
inputting metadata for the first subset of the plurality of assets into an asset attribution model to determine an accuracy of the asset attribution model based, at least in part, on a first organization in the plurality of organizations with known attribution to the first subset of the plurality of assets (The step of "inputting metadata" is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, claim 3 recites the following abstract ideas: wherein the one or more rules for selecting from the plurality of assets based on metadata of the plurality of assets comprise rules for at least one of assertion testing-based, regression testing-based, location-based, and organization-based metadata in the metadata of the plurality of assets (This is an abstract idea of a "mental process." It involves judgment to define rules based on observed patterns in metadata, such as location, organizational structure, or past model errors. A person could mentally create and apply such rules – e.g., "select all assets from a certain region" or "group assets that failed prior predictions" – by reviewing and categorizing metadata using logic and experience. This rule formation based on observation, analysis, and classification is a cognitive task that can be practically performed in the human mind, and thus falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).)

Regarding claim 4, the rejection of claim 1 is incorporated herein.
Further, claim 4 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: querying a repository for the first subset of the plurality of assets, wherein the query is generated based, at least in part, on logic for metadata of the plurality of assets expressed by the first rule (The step of "querying a repository" is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).).

Regarding claim 5, the rejection of claim 1 is incorporated herein. Further, claim 5 recites the following abstract ideas: based on a determination that accuracy of the asset attribution model on the first subset of the plurality of assets fails an accuracy criterion (This is an abstract idea of a "mental process." It involves reviewing an accuracy score and determining whether it meets a predefined threshold. A person could mentally compare, through observation, the calculated accuracy (e.g., 70%) against a set criterion (e.g., 80%) and conclude the model has failed to meet the requirement. This type of evaluation can be readily performed in the human mind or with simple tools, and thus constitutes an abstract idea of a mental process.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: retraining the asset attribution model with the updated architecture on asset metadata (The step of "retraining" a model is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f).)
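The "mental process" the Office keeps describing (apply a metadata rule to pick a subset, then compare an accuracy score to a threshold) reduces to very little in code terms. A hypothetical sketch, using the rejection's own example rule and numbers, with the asset records invented for illustration:

```python
# Hypothetical sketch of the rejection's examples: rule-based subset
# selection ("domain ends in .gov and country is US") followed by a
# simple accuracy-criterion check (70% observed vs. an 80% threshold).
assets = [
    {"domain": "nasa.gov", "country": "US"},    # matches the rule
    {"domain": "example.com", "country": "US"}, # wrong domain suffix
    {"domain": "gov.uk", "country": "GB"},      # wrong suffix and country
]

rule = lambda a: a["domain"].endswith(".gov") and a["country"] == "US"
subset = [a for a in assets if rule(a)]  # only the nasa.gov record matches

accuracy, criterion = 0.70, 0.80
needs_update = accuracy < criterion      # True: model fails the criterion
```

Whether such a filter-and-compare step is fairly characterized as a practically-performable mental process is exactly what Applicant and the Office dispute; the sketch only shows the operations the claim language itself recites.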
Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract ideas: wherein the accuracy criterion comprises a determination of whether the asset attribution model correctly predicts a threshold number of assets of the first subset of the plurality of assets according to known attributions of the first subset of the plurality of assets to the one or more organizations (This is an abstract idea of a "mental process." It involves comparing the number of correct predictions to a threshold to decide whether performance is acceptable. A person could review a list of known outcomes, count how many predictions were correct, and judge whether that number meets a required minimum. This type of evaluation – based on observation, counting, and applying a threshold – is a cognitive task that can be performed in the human mind or with pen and paper, and thus qualifies as an abstract idea of a mental process. See MPEP 2106.04(a)(2)(III).)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract ideas: based on the determination that accuracy of the asset attribution model on the first subset of the plurality of assets fails the accuracy criterion (This is an abstract idea of a "mental process." It involves reviewing an accuracy score and determining whether it meets a predefined threshold. A person could mentally compare, through observation, the calculated accuracy (e.g., 70%) against a set criterion (e.g., 80%) and conclude the model has failed to meet the requirement.
This type of evaluation can be readily performed in the human mind or with simple tools, and thus constitutes an abstract idea of a mental process.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: updating at least one of a type of the asset attribution model, parameters of the asset attribution model, hyperparameters of the asset attribution model, and a training method for the asset attribution model (The step of "updating" a model is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).)

Regarding claim 8, the rejection of claim 1 is incorporated herein. Further, claim 8 recites the following abstract ideas: selecting one or more subsets of the plurality of assets based, at least in part, on respective rules in the one or more rules (This is an abstract idea of a "mental process." It involves using logic and judgment to apply multiple rules to a set of data in order to identify subsets. A person can mentally review asset records, apply different conditions (e.g., by location, organization, or metadata type), and separate the data into corresponding groups. This rule-based selection and grouping can be performed mentally or with pen and paper, and therefore falls within the mental process grouping. See MPEP 2106.04(a)(2)(III).); based on a determination that the asset attribution model satisfies accuracy criteria for corresponding subsets of the one or more subsets, deploying the asset attribution model for asset attribution (This is an abstract idea of a "mental process." It involves reviewing results across different subsets, determining that performance meets criteria, and deciding to proceed with use.
A person could evaluate whether predictions for each group are accurate enough and, based on that judgment, choose to rely on the method going forward. This type of performance review and decision-making is based on observation and reasoning, can practically be performed in the human mind, and thus constitutes an abstract idea.).

Regarding claim 9, the following claim elements are abstract ideas: evaluating an asset attribution model for accuracy in predicting attributed organizations on metadata for a first subset of a plurality of assets, wherein the first subset of the plurality of assets is based, at least in part, on one or more rules for metadata of the plurality of assets, wherein the accuracy in predicting attributed organizations is based on known attributed organizations for assets having metadata that satisfies the one or more rules (This is an abstract idea of a "mental process." It involves reviewing a group of assets selected using rules, comparing predicted organizations to known ones, and judging how accurate the predictions are. A person could manually apply the rules to choose relevant records, compare each predicted attribution to the correct one, and calculate how often they match. This type of rule-based filtering, observation, and evaluation can be performed in the human mind or with pen and paper, and therefore qualifies as an abstract idea of a mental process.); engineering one or more features for metadata of the plurality of assets based, at least in part, on the one or more rules for metadata of the plurality of assets (This is an abstract idea of a "mental process." It involves observing metadata attributes and, based on human judgment, creating one or more new features according to a selection rule. For example, a person could examine domain names, apply a rule such as "domain ends in .edu," and decide to assign a label or flag to those assets.
This type of rule-based feature engineering – driven by observation and judgment – can be performed mentally or with pen and paper, and therefore constitutes an abstract idea. See MPEP 2106.04(a)(2)(III).).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: updating architecture of the asset attribution model to receive the one or more features as additional inputs (The step of "updating architecture" of a model is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).); retraining the asset attribution model with the updated architecture (The step of "retraining" a model is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f).); program code (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.)

Regarding claim 10, the rejection of claim 9 is incorporated herein. Further, claim 10 recites the following abstract ideas: to select the first subset of the plurality of assets from the plurality of assets based, at least in part, on a first rule of the one or more rules for metadata of the plurality of assets (This is an abstract idea of a "mental process." It involves selecting a subset of data based on applying a logical rule to the metadata. A person could examine asset records, apply a condition such as "assets located in a specific country" or "assets with certain domain names," and separate out the matching entries.
This rule-based selection, based on observation and judgment, can practically be done in the human mind or with pen and paper, and thus qualifies as an abstract idea of a mental process.).

Regarding claim 11, the rejection of claim 9 is incorporated herein. Further, claim 11 recites the following abstract ideas: to select the first subset of the plurality of assets from the plurality of assets based, at least in part, on a first rule of the one or more rules for metadata of the plurality of assets (This is an abstract idea of a "mental process." It involves observing the performance of the model and making a judgment about whether or not it meets an accuracy requirement. A person could review known outcomes, compare them to the model's predictions, and decide whether the model is sufficiently accurate based on a defined criterion. This type of evaluation – based on observation and reasoning – can be performed in the human mind or with pen and paper, and thus is part of the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).).

Regarding claim 12, the rejection of claim 11 is incorporated herein. Further, claim 12 recites the following abstract ideas: determination that a number of correct predictions by the asset attribution model on metadata for the first subset of the plurality of assets is above a threshold number of correct predictions, wherein correct predictions are according to known attributed organizations for assets in the first subset of the plurality of assets (This is an abstract idea of a "mental process." It involves counting how many predictions are correct and comparing that number to a predefined threshold. A person could manually review each prediction, check it against known correct attributions, tally the correct ones, and decide whether the total exceeds the required number.
This type of evaluation, based on observation, counting, and judgment, can practically be performed in the human mind or with pen and paper, and therefore qualifies as an abstract idea.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: accuracy criterion (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.)

Regarding claim 13, the rejection of claim 11 is incorporated herein. Further, claim 13 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: update at least one of a type of the asset attribution model, parameters of the asset attribution model, hyperparameters of the asset attribution model, and a training method for the asset attribution model based, at least in part, on the asset attribution model failing the accuracy criterion (The step of "updating" a model is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).).

Regarding claim 14, the rejection of claim 9 is incorporated herein. Further, claim 14 recites the following abstract ideas: select one or more subsets of the plurality of assets from the plurality of assets at least including the first subset of the plurality of assets based, at least in part, on the one or more rules for metadata of the plurality of assets (This is an abstract idea of a "mental process." It involves reviewing metadata and applying one or more logical rules to select multiple groups of assets, including the previously selected subset.
A person could mentally apply different conditions – such as by organization, region, or past performance – to sort and group data records. This kind of rule-based selection and grouping, based on observation and reasoning, can be performed in the human mind or with pen and paper, and therefore falls into the abstract idea category of mental process. See MPEP 2106.04(a)(2)(III).); evaluate the asset attribution model for accuracy in predicting attributed organizations on metadata for the one or more subsets of the plurality of assets (This is an abstract idea of a "mental process." It involves reviewing the model's predictions for one or more groups of data and determining how accurate those predictions are. A person could examine each predicted organization, compare it to the known correct one, and tally how many are correct within each subset. This evaluation – based on observation, comparison, and simple calculation – can practically be performed in the human mind or with pen and paper, and therefore qualifies as an abstract idea.); determination that accuracy of the asset attribution model in predicting attributed organizations on metadata for the one or more subsets of the plurality of assets satisfies respective accuracy criteria (This is an abstract idea of a "mental process." It involves reviewing whether the model's predictions for each subset meet certain accuracy standards, and then deciding to use the model based on that determination. A person could assess each group's results, judge whether they are good enough according to predefined criteria, and choose to proceed with using the method. This type of performance-based decision-making, grounded in observation and reasoning, can be performed in the human mind or with pen and paper, and thus qualifies as an abstract idea of a mental process. See MPEP 2106.04(a)(2)(III).)
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: deploying the asset attribution model (The step of "deploying" a model is merely a generic implementation that amounts to transmitting data over a network, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i).)

Regarding claim 15, the following claim elements are abstract ideas: for each data slicing rule of a plurality of data slicing rules that correspond to different characteristics of network accessible assets, obtain a data slice according to the data slicing rule from a repository of data about a plurality of network accessible assets with known attributions to one or more organizations of a plurality of organizations (This is an abstract idea of a "mental process." It involves using a set of rules – each based on a different observable characteristic – to identify and extract matching groups of data from a larger set. A person could examine a list of asset records, apply each rule one by one (e.g., by location, organization, or device type), and manually group the assets that satisfy each rule. This kind of iterative rule-based sorting and selection can be performed through observation and reasoning using pen and paper, and therefore qualifies as an abstract idea of a mental process. See MPEP 2106.04.); evaluate accuracy of the machine learning model with the known attributions corresponding to the organization attribution predictions (This is an abstract idea of a "mental process." It involves comparing predicted outcomes to known correct answers and determining how accurate the predictions are.
A person could manually review the model's predicted organizations, match them against known attributions, and calculate how many predictions were correct. This type of evaluation – based on observation, comparison, and simple calculation – can practically be performed mentally or with pen and paper, and therefore qualifies as an abstract idea under the mental process grouping.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: a processor (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.); a computer-readable medium having instructions stored thereon that are executable by the processor (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.); obtain organization attribution predictions from a machine learning model based, at least in part, on the obtained data slices (This limitation amounts to adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g). Obtaining predictions from a machine learning model (i.e., mere data gathering in conjunction with an abstract idea) is directed to a well-understood, routine, and conventional activity of data transmission. See MPEP 2106.05(d)(II)(i).).

Regarding claim 18, the rejection of claim 16 is incorporated herein. Further, claim 18 recites the following abstract ideas: determination that accuracy of the machine learning model for at least a first of the obtained data slices fails the accuracy criterion (This is an abstract idea of a "mental process." It involves reviewing the model's performance on a specific group of data and deciding that its accuracy does not meet a required standard.
A person could manually compare the model’s predictions to known results for that subset, calculate how accurate the predictions are, and conclude – based on observation and reasoning – that the accuracy falls short of the expected level. This type of evaluation can be performed mentally or with pen and paper, and therefore qualifies as an abstract idea under the mental process grouping.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: update at least one of a type of the machine learning model, parameters of the machine learning model, hyperparameters of the machine learning model, and a training method for the machine learning model based, at least in part, on the… (The step of “updating” a model is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).). Regarding claim 19, the rejection of claim 15 is incorporated herein. Further, claim 19 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: rules for obtaining at least one of assertion testing-based data slices, regression testing-based data slices, location-based data slices, and organization-based data slices (The steps add insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g). These elements represent generic computer functions (e.g., mere data gathering and transmitting steps in conjunction with the abstract idea).). Regarding claim 20, the rejection of claim 15 is incorporated herein.
Further, claim 20 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: update the repository with additional data about network accessible assets with additional data obtained from ongoing network scanning (The step of “updating the repository” is merely a generic data operation that amounts to storing and retrieving information in memory, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(iv).). Regarding claim 21, the rejection of claim 1 is incorporated herein. Further, claim 21 recites the following abstract ideas: determining a number of the one or more features based, at least in part, on the accuracy of the asset attribution model on the first subset of the plurality of assets, wherein the number of the one or more features is determined to be higher for a lower accuracy of the asset attribution model and lower for a higher accuracy of the asset attribution model (This is an abstract idea of a “mental process.” It involves reviewing the accuracy of a model, evaluating whether the accuracy is high or low, and deciding to increase or decrease the number of features accordingly. For example, a person could observe model performance results, determine the accuracy is too low, and mentally decide to add more features, or alternatively conclude that accuracy is sufficient and decide to use fewer features. This type of conditional performance-based decision-making is based on observation, comparison, and judgment, and can be practically performed in the human mind. Therefore, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).).
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein engineering the one or more features (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation.) Regarding claim 22, the rejection of claim 9 is incorporated herein. Further, claim 22 recites the following abstract ideas: determine a number of the one or more features based, at least in part, on accuracy of the asset attribution model in predicting attributed organizations, wherein the number of the one or more features is determined to be higher for a lower accuracy of the asset attribution model and lower for a higher accuracy of the asset attribution model (This is an abstract idea of a “mental process.” It involves reviewing the accuracy of a model, evaluating whether the accuracy is high or low, and deciding to increase or decrease the number of features accordingly. For example, a person could observe model performance results, determine the accuracy is too low, and mentally decide to add more features, or alternatively conclude that accuracy is sufficient and decide to use fewer features. This type of conditional performance-based decision-making is based on observation, comparison, and judgment, and can be practically performed in the human mind. Therefore, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).). Regarding claim 23, the rejection of claim 15 is incorporated herein.
Further, claim 23 recites the following abstract ideas: determine a number of the one or more features based, at least in part, on accuracy of the machine learning model for the first of the obtained data slices, wherein the number of the one or more features is determined to be higher for a lower accuracy of the machine learning model and lower for a higher accuracy of the machine learning model (This is an abstract idea of a “mental process.” It involves reviewing the accuracy of a model, evaluating whether the accuracy is high or low, and deciding to increase or decrease the number of features accordingly. For example, a person could observe model performance results, determine the accuracy is too low, and mentally decide to add more features, or alternatively conclude that accuracy is sufficient and decide to use fewer features. This type of conditional performance-based decision-making is based on observation, comparison, and judgment, and can be practically performed in the human mind. Therefore, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Raz et al., (Pat. No.: US 11734143 B2 (Filed: 2020)) in view of Kraning et al., (Pub. No.: US 20210105304 A1 (Published: 04/08/2021)). Regarding claim 1, Raz teaches the following limitations: A method comprising: selecting a first subset of a plurality of assets based (Raz, [Abstract] mentions “determining a plurality of data slices that are subsets of the dataset.”), at least in part, on a first rule of one or more rules (Raz, [col. 6, lines 46-48] mentions “the data slice may be determined by obtaining a definition of data slice and applying the definition to identify the instances that are included in the data slice.”) for selecting from the plurality of assets based on metadata (Raz, [col. 11, lines 50-51] mentions “Each data instance may be associated with at least one metadata value.”) of the plurality of assets, wherein the one or more rules correspond to respective subsets of the plurality of assets including the first subset of the plurality of assets with known attributions (Raz, [col. 1, lines 34-35] mentions “wherein each data instance is associated with a label;” – The labels defined in Raz are not labels that describe attribution to organizations.)
to one or more organizations in a plurality of organizations; inputting metadata for the first subset of the plurality of assets into an asset attribution model (Raz, [col. 4, lines 40-42] “The feature values may be utilized as an input for the machine learning model, as an input for the predictor, or the like.” Raz, [col. 11, lines 45-50] further mentions “Each data instance may comprise features values in a feature space…the dataset may comprise metadata values in a metadata space.”) to determine an accuracy of the “predictive model” based (Raz, [col. 7, lines 39-45] mentions “On Step 140, a performance measurement of the predictor over each data slice may be computed. In some exemplary embodiments, the performance measurement may be indicative of a successful estimation of labels to data instances comprised by the data slice. In some exemplary embodiments, the performance measurement may measure how well the predictor predicts the actual label.”), at least in part, on a first organization in the plurality of organizations with known attribution (Raz, [col. 1, lines 34-35] mentions “wherein each data instance is associated with a label;” – The labels defined in Raz are not labels that describe attribution to organizations.) to the first subset of the plurality of assets; based on a determination that accuracy of the “predictive model” on the first subset of the plurality of assets fails an accuracy criterion (Raz, [col. 7, lines 50-55] mentions “The performance measurement may be computed based on the number of instances, based on the number of instances for which a correct prediction was provided, or the like. In some exemplary embodiments, the performance measurement may be based on, for example, F1 score, Accuracy, R-squared, RSME, or the like.”), updating architecture for the “predictive model” (Raz, [col.
4, lines 15-17] mentions “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” (Raz, [col. 4, lines 45-49] further mentions “Additionally or alternatively, the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like.”)), wherein updating architecture for the “predictive model” comprises engineering one or more features that comprise one or more metadata fields of the plurality of assets indicated in the first rule for selecting the plurality of assets (Raz, [col. 4, lines 15-17 and 45-54] “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed…the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model...Additionally or alternatively, the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” [col. 12, lines 6-7] “Data slices may be defined by constraints on the features values, on the meta values, or the like.” – Under the broadest reasonable interpretation, the “first rule for selecting the plurality of assets” corresponds to Raz’s slice definitions, which are created using constraints on feature and metadata values of the assets. Raz further teaches that when model accuracy on a slice fails a threshold, the model architecture is updated using feature engineering to add or modify features. Because the slice is defined using metadata constraints, the engineered features necessarily comprise the same metadata fields that are indicated in the rule used to define and select the subsets of assets.)
; and configuring an input component of the asset attribution model to process the one or more features (Raz, [col. 4, lines 40-54] “The feature values may be utilized as an input for the machine learning model, as an input for the predictor, or the like. In some exemplary embodiments, the mitigating action may comprise obtaining an additional dataset and retraining the predictor therewith…such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like… Additionally or alternatively, the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.”); deploying the “predictive model” with the updated architecture to predict attribution of one or more assets to an organization in the plurality of organizations (Raz, [col. 8, lines 66-67 and col. 9, lines 1-6] “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized. In some exemplary embodiments, the predictor may be utilized in order to provide a predicted label for a data instance that is not comprised by the dataset. In some exemplary embodiments, the predictor may be deployed in the field, may be provided as part of an update of a software utilizing the predictor, or the like.”). However, Raz does not teach, but Kraning teaches, the following limitations: plurality of assets with known attributions to one or more organizations in a plurality of organizations (Kraning, paragraph [0126] mentions “In some embodiments, machine learning may be applied to identify network assets associated with an entity.
For example, data included in responses from multiple network systems can be input into and processed using one or more machine learning models to identify information included in the responses that is indicative of network assets associated with the entity.”); plurality of assets based on metadata (Kraning, paragraph [0133] mentions “asset data may include…metadata associated with a network asset.”); evaluating the one or more assets for security risks based on exposure to the Internet (Kraning, paragraph [0226] “In some embodiments, a view may include a displayed listing of network-facing assets representing access points of the combined attack surface. The listing may include details associated with each asset including asset identifier, asset type, asset behavioral characteristics, asset risk assessment, asset security/risk history, communications logs associated with the asset, ownership/responsibility for the asset (e.g., an entity identifier), etc.”). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Raz and Kraning before them, to incorporate the use of subsets of network assets with known organizational ownership, as taught by Kraning, into the slice-based model evaluation system of Raz. One would have been motivated to make such a combination in order to evaluate how accurately a machine learning model predicts asset ownership by an organization within specific data segments, and to use these evaluation results to guide improvements to model architecture. This would allow more precise and reliable attribution of assets to organizations by identifying and addressing model weaknesses on targeted slices of asset metadata. Regarding claim 3, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 3 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis.
Raz in view of Kraning further teaches: rules for at least one of assertion testing-based, regression testing-based (Raz, [col. 6, lines 23-28] mentions “Additionally or alternatively, the predictor may be trained based on the dataset by utilizing algorithms such as but not limited to Linear Regression, Logistic Regression, Classification and Regression Tree (CART), Naïve Bayes, K-Nearest Neighbors (KNN), K-means, Principal Component Analysis (PCA), or the like”), location-based, and organization-based metadata in the metadata of the plurality of assets (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” paragraph [0133] further mentions “asset data may include… metadata associated with a network asset.”). Regarding claim 4, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 4 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: wherein selecting the first subset of the plurality of assets based, at least in part, on the first rule (Raz, [col. 12, lines 5-7] mentions “Slices Definitions Obtainer 230 may be configured to obtain definitions of data slices. Data slices may be defined by constraints on the features values, on the meta values, or the like.”) comprises querying a repository for the first subset of the plurality of assets, wherein the query is generated based (Kraning, paragraph [0128] mentions “if a domain name registration is identified as a network asset, step 620 may include querying automatically an external registration database of a registration authority (e.g., a WHOIS lookup search). Identifying information regarding a related entity (e.g., a name, account identifier, email address, etc.)
that registered the domain name may be returned in response to the registration database. Other similar processes may be performed to retrieve information indicative of a related entity that is responsible for the network asset.”), at least in part, on logic for metadata of the plurality of assets expressed by the first rule. Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Raz and Kraning before them, to incorporate querying of a database to retrieve data about network-accessible assets, as taught by Kraning, into the slice-based model evaluation system of Raz. One would have been motivated to make such a combination in order to enable dynamic access to relevant asset metadata from an external source, thereby allowing the evaluation system to retrieve only the data necessary for attribution analysis and model input. This would improve efficiency, reduce memory overhead, and support real-time evaluation of attribution models on fresh or targeted subsets of asset data. Regarding claim 5, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 5 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: based on the determination that the accuracy of the asset attribution model on the first subset of the plurality of assets fails the accuracy criterion (Raz, [col. 7, lines 50-55] mentions “The performance measurement may be computed based on the number of instances, based on the number of instances for which a correct prediction was provided, or the like. In some exemplary embodiments, the performance measurement may be based on, for example, F1 score, Accuracy, R-squared, RSME, or the like.” (Raz, [col.
4, lines 15-17] further mentions “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.”)), retraining the asset attribution model with the updated architecture on asset metadata (Raz, [col. 4, lines 42-49] mentions “In some exemplary embodiments, the mitigating action may comprise obtaining an additional dataset and retraining the predictor therewith. Additionally or alternatively, the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like.”). Regarding claim 6, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 6 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: a determination of whether the asset attribution model correctly predicts (Raz, [col. 1, lines 40-46] mentions “computing, for each data slice…the performance measurement is indicative of a successful label prediction for a data instance comprised by the data slice,”) a threshold number of assets of the first subset of the plurality of assets (Raz, [col. 7, lines 59-65] mentions “the performance measurement is indicative of a successful label prediction for a data instance comprised by the data slice” [col. 13, lines 55-60] further mentions “MinSize may refer to a minimal size threshold. The minimal size threshold may define the minimal number of data instances comprised by a data slice (e.g., s) that is required in order that P(s) may have a positive effect on the performance measurement of the predictor over the dataset. In some exemplary embodiments, the predictor may be trained based on a machine learning model.
As machine learning models may require a sufficient number of examples to learn from, a minimal size may be required in order to ensure the correctness of a performance measurement over a data slice.”) according to known attributions of the first subset of the plurality of assets to the one or more organizations (Kraning, paragraph [0126] mentions “In some embodiments, machine learning may be applied to identify network assets associated with an entity. For example, data included in responses from multiple network systems can be input into and processed using one or more machine learning models to identify information included in the responses that is indicative of network assets associated with the entity.”). Regarding claim 7, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 7 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: based on the determination that accuracy of the asset attribution model on the first subset of the plurality of assets fails the accuracy criterion (Raz, [col. 7, lines 50-55] mentions “The performance measurement may be computed based on the number of instances, based on the number of instances for which a correct prediction was provided, or the like. In some exemplary embodiments, the performance measurement may be based on, for example, F1 score, Accuracy, R-squared, RSME, or the like.” (Raz, [col. 4, lines 15-17] further mentions “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.”)), updating at least one of a type of the asset attribution model (Raz, [col. 9, lines 44-46] mentions “Additionally or alternatively, the action may comprise changing the machine learning algorithm to a different machine learning algorithm.”), parameters of the asset attribution model (Raz, [col.
4, lines 43-45] mentions “the mitigating action may comprise obtaining an additional dataset and retraining the predictor therewith.”), hyperparameters of the asset attribution model (Raz, [col. 4, lines 45-49] “the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like.”), and a training method for the asset attribution model (Raz, [col. 5, lines 2-6] mentions “Additionally or alternatively, the mitigating action may comprise re-training the ANN by utilizing a different algorithm than Gradient descent, such as for example, Newton's method, Conjugate gradient, Levenberg-Marquardt algorithm, or the like.”). Regarding claim 8, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1, and claim 8 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: selecting one or more subsets of the plurality of assets based (Raz, [col. 1, lines 58-63] mentions “determining a plurality of data slices, wherein at least one data slice of the plurality of data slices comprising a data instance comprised by the dataset; determining a slice for analysis, wherein the plurality of data slices comprise one or more sub-slices of the slice for analysis, wherein the one or more sub-slices consist of instances that are comprised by the slice for analysis;”), at least in part, on respective rules in the one or more rules (Raz, [col. 6, lines 51-60] mentions “In some exemplary embodiments, the constraint may represent a definition of the data slice. The constraint may be a constraint on one or more feature values in the features space. On Step 138, the constraint may be applied on the dataset.
Applying the constraint may comprise identifying at least one data instance for which the constraint is held, e.g., the one or more feature values of the at least one data instance are in line with the constraint. The at least one identified data instance may be a member of the data slice.”); and based on a determination that the asset attribution model satisfies accuracy criteria for corresponding subsets of the one or more subsets (Raz, [col. 7, lines 39-43] mentions “On Step 140, a performance measurement of the predictor over each data slice may be computed. In some exemplary embodiments, the performance measurement may be indicative of a successful estimation of labels to data instances comprised by the data slice.” [col. 8, lines 49-53] further mentions “On Step 160, it may be determined whether the performance measurement of the predictor over the dataset is below a threshold or above the threshold. In case that the performance measurement of the predictor over the dataset is above the threshold, Step 165 may be performed.”), deploying the asset attribution model for asset attribution (Raz, [col. 8, lines 66-67 and col. 9, lines 1-5] mentions “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized. In some exemplary embodiments, the predictor may be utilized in order to provide a predicted label for a data instance that is not comprised by the dataset. In some exemplary embodiments, the predictor may be deployed in the field,”). Regarding claim 9, Raz teaches the following limitations: A non-transitory, computer-readable medium having program code stored thereon, the program code comprising instructions (Raz, [col. 2, lines 13-16] mentions “a computer program product comprising a non-transitory computer readable storage medium retaining program instructions, which program instructions when read by a processor,”): evaluate a “predictive model” for accuracy in predicting (Raz, [col.
1, lines 40-46] mentions “computing, for each data slice… the performance measurement is indicative of a successful label prediction for a data instance comprised by the data slice,”) attributed organizations on metadata for a first subset of a plurality of assets, wherein the first subset of the plurality of assets is based, at least in part, on one or more rules for metadata of the plurality of assets (Raz, [col. 10, lines 40-45] mentions “In some exemplary embodiments, a meta-predictor may be trained to predict, based on a set of data slices, definitions thereof, parameters of the feature space, size of each data slice, size of the dataset, combination thereof, or the like, the data slice that the domain expert will select.” [col. 11, lines 50-51] further mentions “Each data instance may be associated with at least one metadata value.”), wherein the accuracy in predicting attributed organizations is based on known attributed organizations for assets having metadata that satisfies the one or more rules; engineer one or more features that comprise one or more metadata fields indicated in the one or more rules for metadata of the plurality of assets (Raz, [col. 4, lines 50-54] mentions “Additionally or alternatively, the mitigating action may comprise changing the model utilized by the predictor. Additionally or alternatively, the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” [col. 12, lines 4-7] mentions “Slices Definitions Obtainer 230 may be configured to obtain definitions of data slices.
Data slices may be defined by constraints on the features values, on the meta values, or the like.”); update architecture of the “predictive model” to receive the one or more features as additional inputs, wherein instructions to update architecture of the “predictive model” comprise instructions to configure an input component of the “predictive model” to process the one or more features; retrain the “predictive model” with the updated architecture (Raz, [col. 4, lines 40-49] mentions “The feature values may be utilized as an input for the machine learning model, as an input for the predictor, or the like. In some exemplary embodiments, the mitigating action may comprise obtaining an additional dataset and retraining the predictor therewith. Additionally or alternatively, the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like.”); deploy the retrained “predictive model” with the updated architecture to predict attribution of one or more assets to an organization in the known attributed organizations (Raz, [col. 8, lines 66-67 and col. 9, lines 1-6] “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized. In some exemplary embodiments, the predictor may be utilized in order to provide a predicted label for a data instance that is not comprised by the dataset.
In some exemplary embodiments, the predictor may be deployed in the field, may be provided as part of an update of a software utilizing the predictor, or the like.”). However, Raz does not teach, but Kraning teaches, the following limitations: predicting attributed organizations is based on known attributed organizations for assets having metadata (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” Paragraph [0086] further mentions “The entity 220 may represent an enterprise organization (as previously mentioned) or may represent other types of entities such as government organizations, educational organizations, network service providers, individuals, or any other types of entities.” [0133] further mentions “asset data may include…metadata associated with a network asset.”); the plurality of assets (Kraning, Abstract mentions “Network assets may include ephemeral Internet-accessible assets such as IP addresses, domain names, digital certificates, and cloud infrastructure accounts.”); evaluate the one or more assets for security risks based on exposure to the Internet (Kraning, paragraph [0226] “In some embodiments, a view may include a displayed listing of network-facing assets representing access points of the combined attack surface. The listing may include details associated with each asset including asset identifier, asset type, asset behavioral characteristics, asset risk assessment, asset security/risk history, communications logs associated with the asset, ownership/responsibility for the asset (e.g., an entity identifier), etc.”).
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Raz and Kraning before them, to incorporate the use of subsets of network assets with known organizational ownership, as taught by Kraning, into the slice-based model evaluation system of Raz. One would have been motivated to make such a combination in order to evaluate how accurately a machine learning model predicts asset ownership by an organization within specific data segments, and to use these evaluation results to guide improvements to model architecture. This would allow more precise and reliable attribution of assets to organizations by identifying and addressing model weaknesses on targeted slices of asset metadata. Regarding claim 10, Raz in view of Kraning, as outlined above, teaches all the elements of claim 9; claim 10 is therefore rejected for the same reasons as those presented for claim 9, mutatis mutandis. Raz in view of Kraning further teaches: select the first subset of the plurality of assets from the plurality of assets based, at least in part, on a first rule of the one or more rules (Raz, [col. 6, lines 46-48] mentions “the data slice may be determined by obtaining a definition of data slice and applying the definition to identify the instances that are included in the data slice.” [col. 12, lines 4-7] further mentions “Slices Definitions Obtainer 230 may be configured to obtain definitions of data slices. Data slices may be defined by constraints on the features values, on the meta values, or the like.”) for metadata of the plurality of assets (Kraning, [0133] mentions “asset data may include…metadata associated with a network asset.” Raz, [col. 11, lines 50-51] further mentions “Each data instance may be associated with at least one metadata value.”). 
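Raz's "definitions of data slices … defined by constraints on the features values, on the meta values" can be pictured as executable rules applied to asset metadata, as in the following hypothetical sketch (field names are invented for illustration only):

```python
# Hypothetical illustration of Raz's "constraints on the features values,
# on the meta values" as executable slicing rules; field names are invented.
def make_slicing_rule(field, allowed):
    """A slicing rule: keep assets whose metadata field takes an allowed value."""
    def rule(metadata):
        return metadata.get(field) in allowed
    return rule

rules = [
    make_slicing_rule("asset_type", {"domain", "certificate"}),  # the "first rule"
    make_slicing_rule("tld", {"com"}),
]

assets = [
    {"asset_type": "domain", "tld": "com"},
    {"asset_type": "ip_address", "tld": None},
]

# "select the first subset ... based, at least in part, on a first rule"
first_subset = [a for a in assets if rules[0](a)]
print(first_subset)  # [{'asset_type': 'domain', 'tld': 'com'}]
```

Each rule yields one data slice; the model's accuracy is then measured per slice rather than only over the full dataset.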
Regarding claim 11, Raz in view of Kraning, as outlined above, teaches all the elements of claim 9; claim 11 is therefore rejected for the same reasons as those presented for claim 9, mutatis mutandis. Raz in view of Kraning further teaches: determine whether accuracy of the asset attribution model (Raz, [col. 1, lines 37-45] mentions “determining a plurality of data slices, wherein at least one data slice of the plurality of data slices comprising a data instance comprised by the dataset; computing, for each data slice in the plurality of data slices, a performance measurement of the predictor over the data slice, wherein said computing is based on an application of the predictor on each data instance that is mapped to the data slice, wherein the performance measurement is indicative of a successful label prediction”) in predicting attributed organizations (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.”) satisfies an accuracy criterion (Raz, [col. 8, lines 49-51] mentions “On Step 160, it may be determined whether the performance measurement of the predictor over the dataset is below a threshold or above the threshold.”). Regarding claim 12, Raz in view of Kraning, as outlined above, teaches all the elements of claim 11; claim 12 is therefore rejected for the same reasons as those presented for claim 11, mutatis mutandis. Raz in view of Kraning further teaches: determination that a number of correct predictions by the asset attribution model on metadata for the first subset of the plurality of assets is above a threshold number of correct predictions (Raz, [col. 8, lines 32-34] mentions “On Step 150, a performance measurement of the predictor over the dataset may be computed. In some exemplary embodiments, Step 140 may result in a plurality of performance measurements.” [col. 
8, lines 49-53] further mentions “On Step 160, it may be determined whether the performance measurement of the predictor over the dataset is below a threshold or above the threshold. In case that the performance measurement of the predictor over the dataset is above the threshold, Step 165 may be performed.” [col. 8, lines 66-67] further mentions “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized.”), wherein correct predictions are according to known attributed organizations for assets in the first subset of the plurality of assets (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” Paragraph [0086] further mentions “The entity 220 may represent an enterprise organization (as previously mentioned) or may represent other types of entities such as government organizations, educational organizations, network service providers, individuals, or any other types of entities.”). Regarding claim 13, Raz in view of Kraning, as outlined above, teaches all the elements of claim 11; claim 13 is therefore rejected for the same reasons as those presented for claim 11, mutatis mutandis. Raz in view of Kraning further teaches: update at least one of a type of the asset attribution model, parameters of the asset attribution model, hyperparameters of the asset attribution model, and a training method for the asset attribution model based (Raz, [col. 9, lines 36-51] mentions “the predictor may be trained based on a machine learning model. In that embodiment, the mitigating action may comprise changing the network architecture, the algorithm utilized to train the network, or the like (176). 
In some exemplary embodiments, layers may be added to the ANN, a layer may be removed from the ANN, a node may be added to the ANN, connectivity between nodes or layers may be modified, or the like. Additionally or alternatively, the action may comprise changing the machine learning algorithm to a different machine learning algorithm. As an example, the predictor may be trained based on a decision tree classifier. The mitigating action may comprise changing the machine learning algorithm into a random forest classifier and retraining the predictor, changing a machine learning algorithm into a rule based logic,”), at least in part, on the asset attribution model failing the accuracy criterion (Raz, [col. 4, lines 15-17] mentions “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 7, lines 50-55] further mentions “The performance measurement may be computed based on the number of instances, based on the number of instances for which a correct prediction was provided, or the like. In some exemplary embodiments, the performance measurement may be based on, for example, F1 score, Accuracy, R-squared, RSME, or the like.”). Regarding claim 14, Raz in view of Kraning, as outlined above, teaches all the elements of claim 9; claim 14 is therefore rejected for the same reasons as those presented for claim 9, mutatis mutandis. Raz in view of Kraning further teaches: select one or more subsets of the plurality of assets from the plurality of assets at least including the first subset of the plurality of assets (Raz, [Abstract] mentions “determining a plurality of data slices that are subsets of the dataset.”), based, at least in part, on the one or more rules for metadata of the plurality of assets (Raz, [col. 
6, lines 46-48] mentions “the data slice may be determined by obtaining a definition of data slice and applying the definition to identify the instances that are included in the data slice.” Kraning, Abstract mentions “Network assets may include ephemeral Internet-accessible assets such as IP addresses, domain names, digital certificates, and cloud infrastructure accounts.” Paragraph [0133] further mentions “asset data may include…metadata associated with a network asset.”); evaluate the asset attribution model for accuracy (Raz, [col. 1, lines 40-46] mentions “computing, for each data slice… the performance measurement is indicative of a successful label prediction for a data instance comprised by the data slice,”) in predicting attributed organizations on metadata for the one or more subsets of the plurality of assets (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” Paragraph [0133] mentions “asset data may include…metadata associated with a network asset.”); based on a determination that accuracy of the asset attribution model (Raz, [col. 
1, lines 37-45] mentions “determining a plurality of data slices, wherein at least one data slice of the plurality of data slices comprising a data instance comprised by the dataset; computing, for each data slice in the plurality of data slices, a performance measurement of the predictor over the data slice, wherein said computing is based on an application of the predictor on each data instance that is mapped to the data slice, wherein the performance measurement is indicative of a successful label prediction”) in predicting attributed organizations on metadata for the one or more subsets of the plurality of assets (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” Paragraph [0133] mentions “asset data may include…metadata associated with a network asset.”) satisfies respective accuracy criteria, deploying the asset attribution model (Raz, [col. 8, lines 1-5] “Table 410 in FIG. 4 illustrates the performance measurement of the predictor over data slices. In the illustrated example, the performance measurement is based on the size of the slice and on the percentage of correct predictions in the data slice (success ratio).” [col. 8, lines 66-67 & col. 9, lines 1-6] mentions “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized. In some exemplary embodiments, the predictor may be utilized in order to provide a predicted label for a data instance that is not comprised by the dataset. In some exemplary embodiments, the predictor may be deployed in the field, may be provided as part of an update of a software utilizing the predictor, or the like.”). Regarding claim 15, Raz teaches the following limitations: An apparatus comprising: a processor (Raz, [col. 
11, lines 18-19] mentions “In some exemplary embodiments, Apparatus 200 may comprise one or more Processor(s) 202.”); and a computer-readable medium having instructions stored thereon that are executable by the processor to cause the apparatus to (Raz, [col. 11, lines 37-42] mentions “Memory 207 may retain program code operative to cause Processor 202 to perform acts associated with any of the subcomponents of Apparatus 200. Memory 207 may comprise one or more components as detailed below, implemented as executables, libraries, static libraries, functions, or any other executable components.”) for each data slicing rule of a plurality of data slicing rules (Raz, [col. 12, lines 4-7] mentions “Slices Definitions Obtainer 230 may be configured to obtain definitions of data slices. Data slices may be defined by constraints on the features values, on the meta values, or the like.”) that correspond to different characteristics of network accessible assets, obtain a data slice according to the data slicing rule (Raz, Abstract mentions “obtaining a dataset comprising data instances. Each data instance is associated with a label; obtaining a predictor. The predictor is configured to provide a prediction of a label for a data instance; determining a plurality of data slices that are subsets of the dataset.”) from a repository of data about a plurality of network accessible assets with known attributions to one or more organizations of a plurality of organizations; obtain organization attribution predictions from a machine learning model based, at least in part, on the obtained data slices (Raz, [col. 11, lines 57-63] mentions “Predictor 212 may be configured to provide a predicted label for an input such as a data instance…In some exemplary embodiments, obtaining Predictor 212 may comprise training a machine learning model based on the dataset.” [col. 
12, lines 20-26] further mentions “Slices Determinator 240 may be configured to determine a data slice based on the definition of data slice obtained by Slices Definitions Obtainer 230…. Slices Determinator 240 may be configured to apply a constraint on the dataset in order to identify data instances that are members of a data slice.”); evaluate accuracy of the machine learning model (Raz, [col. 1, lines 37-45] mentions “determining a plurality of data slices, wherein at least one data slice of the plurality of data slices comprising a data instance comprised by the dataset; computing, for each data slice in the plurality of data slices, a performance measurement of the predictor over the data slice, wherein said computing is based on an application of the predictor on each data instance that is mapped to the data slice, wherein the performance measurement is indicative of a successful label prediction”) with the known attributions corresponding to the organization attribution predictions; based on a determination that accuracy of the machine learning model for at least a first of the obtained data slices fails an accuracy criterion, update architecture of the machine learning model (Raz, [col. 8, lines 49-51] “On Step 160, it may be determined whether the performance measurement of the predictor over the dataset is below a threshold or above the threshold.” [col. 9, lines 7-9] “On Step 170, as the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 9, lines 38-43] “the mitigating action may comprise changing the network architecture, the algorithm utilized to train the network, or the like (176). 
In some exemplary embodiments, layers may be added to the ANN, a layer may be removed from the ANN, a node may be added to the ANN, connectivity between nodes or layers may be modified, or the like.”), wherein the instructions to update architecture of the machine learning model comprise instructions executable by the processor to cause the apparatus to engineer one or more features that comprise one or more metadata fields of the plurality of network accessible assets indicated in a first of the plurality of data slicing rules corresponding to the first of the obtained data slices (Raz, [col. 6-7] “Data slices may be defined by constraints on the features values, on the meta values, or the like.” [col. 4, lines 52-54] “Additionally or alternatively, the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” – teaches that each data slice is defined using constraints on feature and metadata values, and when performance on such a slice fails a threshold, feature engineering is performed to modify the model inputs. Because the slice is defined using metadata constraints, the engineered features necessarily comprise the metadata fields indicated in the slicing rule defining that slice.); configure an internal component of the machine learning model to process the one or more features (Raz, [col. 4, lines 40-49] “The feature values may be utilized as an input for the machine learning model, as an input for the predictor, or the like. In some exemplary embodiments, the mitigating action may comprise obtaining an additional dataset and retraining the predictor therewith. 
Additionally or alternatively, the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like.”); deploy the machine learning model with the updated architecture to predict attribution of one or more network accessible assets to an organization in the plurality of organizations (Raz, [col. 8, lines 66-67 and col. 9, lines 1-6] “On Step 165, as the performance measurement of the predictor is above a threshold, the predictor may be utilized. In some exemplary embodiments, the predictor may be utilized in order to provide a predicted label for a data instance that is not comprised by the dataset. In some exemplary embodiments, the predictor may be deployed in the field, may be provided as part of an update of a software utilizing the predictor, or the like.”). However, Raz does not teach, but Raz in view of Kraning teaches, the limitations: different characteristics of network accessible assets (Kraning, Abstract mentions “Network assets may include ephemeral Internet-accessible assets such as IP addresses, domain names, digital certificates, and cloud infrastructure accounts.”), from a repository of data about a plurality of network accessible assets (Kraning, paragraph [0012] mentions “the computer system receives network information associated with the specified entity from a stored network information database. 
In such embodiments, the computer system identifies the network asset by processing the response data and the network information from the network information database.”) known attributions to one or more organizations of a plurality of organizations (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.”); organization attribution predictions (Kraning, paragraph [0126] mentions “In some embodiments, machine learning may be applied to identify network assets associated with an entity.”); evaluate the one or more network accessible assets for security risks based on exposure to the Internet (Kraning, paragraph [0226] “In some embodiments, a view may include a displayed listing of network-facing assets representing access points of the combined attack surface. The listing may include details associated with each asset including asset identifier, asset type, asset behavioral characteristics, asset risk assessment, asset security/risk history, communications logs associated with the asset, ownership/responsibility for the asset (e.g., an entity identifier), etc.”). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Raz and Kraning before them, to incorporate the use of a repository containing data about network-accessible assets with known organizational attributions, as taught by Kraning, into the slice-based model evaluation system of Raz. One would have been motivated to make such a combination in order to enable evaluation of an asset attribution model using real-world internet asset metadata where ground-truth ownership is available, thereby supporting meaningful accuracy assessment across data slices and enabling targeted model refinement based on attribution performance. 
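The per-slice threshold logic cited throughout this rejection (Raz's Steps 150 and 160-170: compute a performance measurement per slice, then either utilize the predictor or perform a mitigating action) can be sketched as follows. All names here are hypothetical illustrations, not code from Raz.

```python
# Hypothetical sketch of Raz's Steps 150/160-170: compute a per-slice
# performance measurement, then branch on a threshold (utilize vs. mitigate).
def performance_report(model, slices, threshold):
    """Return {slice name: (success ratio, decision)} for each data slice."""
    report = {}
    for name, (inputs, labels) in slices.items():
        correct = sum(model(x) == y for x, y in zip(inputs, labels))
        ratio = correct / len(labels)
        report[name] = (ratio, "utilize" if ratio >= threshold else "mitigate")
    return report

parity = lambda x: x % 2          # toy stand-in for the trained predictor
slices = {
    "good_slice": ([1, 2, 3], [1, 0, 1]),   # predictor always right here
    "bad_slice":  ([1, 2, 3], [0, 0, 0]),   # predictor mostly wrong here
}
report = performance_report(parity, slices, 0.85)
```

The "mitigate" branch is where Raz's architecture changes and feature engineering would occur, targeted at the failing slice rather than the whole dataset.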
Regarding claim 18, Raz in view of Kraning, as outlined above, teaches all the elements of claim 15; claim 18 is therefore rejected for the same reasons as those presented for claim 15, mutatis mutandis. Raz in view of Kraning further teaches: update at least one of a type of the machine learning model, parameters of the machine learning model, hyperparameters of the machine learning model, and a training method for the machine learning model based (Raz, [col. 9, lines 36-51] mentions “the predictor may be trained based on a machine learning model. In that embodiment, the mitigating action may comprise changing the network architecture, the algorithm utilized to train the network, or the like (176). In some exemplary embodiments, layers may be added to the ANN, a layer may be removed from the ANN, a node may be added to the ANN, connectivity between nodes or layers may be modified, or the like. Additionally or alternatively, the action may comprise changing the machine learning algorithm to a different machine learning algorithm. As an example, the predictor may be trained based on a decision tree classifier. The mitigating action may comprise changing the machine learning algorithm into a random forest classifier and retraining the predictor, changing a machine learning algorithm into a rule based logic,”), at least in part, on the determination that accuracy of the machine learning model for at least a first of the obtained data slices fails the accuracy criterion (Raz, [col. 4, lines 15-17] mentions “In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 7, lines 50-55] further mentions “The performance measurement may be computed based on the number of instances, based on the number of instances for which a correct prediction was provided, or the like. 
In some exemplary embodiments, the performance measurement may be based on, for example, F1 score, Accuracy, R-squared, RSME, or the like.”). Regarding claim 19, Raz in view of Kraning, as outlined above, teaches all the elements of claim 15; claim 19 is therefore rejected for the same reasons as those presented for claim 15, mutatis mutandis. Raz in view of Kraning further teaches: rules for obtaining at least one of assertion testing-based data slices, regression testing-based data slices (Raz, [col. 6, lines 23-28] mentions “Additionally or alternatively, the predictor may be trained based on the dataset by utilizing algorithms such as but not limited to Linear Regression, Logistic Regression, Classification and Regression Tree (CART), Naïve Bayes, K-Nearest Neighbors (KNN), K-means, Principal Component Analysis (PCA), or the like)”), location-based data slices, and organization-based data slices (Kraning, Abstract mentions “Response data is received from one or more network systems connected to the computer network and processed to identify one or more network assets associated with an entity such as an enterprise organization.” Paragraph [0133] further mentions “asset data may include… metadata associated with a network asset.”). Regarding claim 20, Raz in view of Kraning, as outlined above, teaches all the elements of claim 15; claim 20 is therefore rejected for the same reasons as those presented for claim 15, mutatis mutandis. Raz in view of Kraning further teaches: update the repository with additional data about network accessible assets with additional data obtained from ongoing network scanning (Kraning, paragraph [0134] mentions “continually update a global record of the network assets associated with a given entity. 
In other words, the asset data stored in the network asset database 416 may be continually updated as new scans of the one or more addresses are performed.” [0137] further mentions “asset discovery and attribution 702 may include scanning the Internet to discover the one or more network assets associated with the entity, attributing responsibility for and/or ownership of the identified network assets to certain related entities, and building a global record of the network assets based on the scanning.”). Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Raz and Kraning before them, to update the repository of asset metadata with additional data obtained from ongoing network scanning, as taught by Kraning. Kraning expressly discloses collecting and enriching asset metadata through continuous external scanning of internet-accessible infrastructure. One would have been motivated to incorporate this capability into the repository used by the slice-based evaluation system of Raz in order to maintain an up-to-date and comprehensive dataset for accurate model evaluation and attribution performance, particularly as network environments and asset exposures change over time. Regarding claim 21, Raz in view of Kraning, as outlined above, teaches all the elements of claim 1; claim 21 is therefore rejected for the same reasons as those presented for claim 1, mutatis mutandis. Raz in view of Kraning further teaches: wherein engineering the one or more features further comprises determining a number of the one or more features based, at least in part, on the accuracy of the asset attribution model on the first subset of the plurality of assets, wherein the number of the one or more features is determined to be higher for a lower accuracy of the asset attribution model and lower for a higher accuracy of the asset attribution model (Raz, [col. 
4, lines 9-17] “A plurality of performance measurements may be computed by computing…In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 4, lines 45-54] “the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like…the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” – under the broadest reasonable interpretation, Raz teaches determining the extent of feature engineering and architecture modification (e.g., number of layers, number of nodes, number of engineered features) based on model accuracy results. When performance falls below a threshold, more modification/features are added; when performance is acceptable, less modification occurs. This corresponds to determining that the number of features is higher for lower model accuracy and lower for higher model accuracy.). Regarding claim 22, Raz in view of Kraning, as outlined above, teaches all the elements of claim 9; claim 22 is therefore rejected for the same reasons as those presented for claim 9, mutatis mutandis. Raz in view of Kraning further teaches: wherein the instructions to engineer the one or more features further comprise instructions to determine a number of the one or more features based, at least in part, on accuracy of the asset attribution model in predicting attributed organizations, wherein the number of the one or more features is determined to be higher for a lower accuracy of the asset attribution model and lower for a higher accuracy of the asset attribution model (Raz, [col. 
4, lines 9-17] “A plurality of performance measurements may be computed by computing…In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 4, lines 45-54] “the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like…the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” – under the broadest reasonable interpretation, Raz teaches determining the extent of feature engineering and architecture modification (e.g., number of layers, number of nodes, number of engineered features) based on model accuracy results. When performance falls below a threshold, more modification/features are added; when performance is acceptable, less modification occurs. This corresponds to determining that the number of features is higher for lower model accuracy and lower for higher model accuracy.). Regarding claim 23, Raz in view of Kraning, as outlined above, teaches all the elements of claim 15; claim 23 is therefore rejected for the same reasons as those presented for claim 15, mutatis mutandis. Raz in view of Kraning further teaches: wherein the instructions to engineer the one or more features comprise instructions executable by the processor to cause the apparatus to determine a number of the one or more features based, at least in part, on accuracy of the machine learning model for the first of the obtained data slices, wherein the number of the one or more features is determined to be higher for a lower accuracy of the machine learning model and lower for a higher accuracy of the machine learning model (Raz, [col. 
4, lines 9-17] “A plurality of performance measurements may be computed by computing…In case that the performance measurement of the predictor over the dataset is below a threshold, a mitigating action may be performed.” [col. 4, lines 45-54] “the mitigating action may comprise changing the architecture of the model used to train the predictor, such as modifying an architecture of a network-based model, modifying the number of layers, the number of nodes in a layer, or the like…the mitigating action may comprise feature engineering in order to change a feature, add a feature, remove a feature, or the like.” – under the broadest reasonable interpretation, Raz teaches determining the extent of feature engineering and architecture modification (e.g., number of layers, number of nodes, number of engineered features) based on model accuracy results. When performance falls below a threshold, more modification/features are added; when performance is acceptable, less modification occurs. This corresponds to determining that the number of features is higher for lower model accuracy and lower for higher model accuracy.). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh whose telephone number is (571)272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, Every other Friday 7 AM - 4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Daravanh Phakousonh/ Examiner, Art Unit 2121 /Li B. Zhen/ Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Jul 21, 2022
Application Filed
Jul 22, 2025
Non-Final Rejection — §101, §103
Oct 14, 2025
Interview Requested
Oct 21, 2025
Examiner Interview Summary
Oct 21, 2025
Applicant Interview (Telephonic)
Oct 27, 2025
Response Filed
Jan 08, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
