Prosecution Insights
Last updated: April 19, 2026
Application No. 17/699,025

METADATA-DRIVEN FEATURE STORE FOR MACHINE LEARNING SYSTEMS

Final Rejection — §101, §103
Filed: Mar 18, 2022
Examiner: KASSIM, IMAD MUTEE
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: C3 AI Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (116 granted / 160 resolved) — above average, +17.5% vs TC avg
Interview Lift: +33.8% (strong) — allowance rate among resolved cases with an interview vs. without
Typical Timeline: 3y 8m average prosecution; 23 applications currently pending
Career History: 183 total applications across all art units
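These headline figures are simple ratios over the examiner's resolved cases. The sketch below shows the arithmetic as it appears to work; only the 116/160 counts and the displayed percentages come from the dashboard above, while the Tech Center baseline and the without-interview rate are assumptions back-filled to reproduce the displayed deltas.

```python
# Reconstruction of the Examiner Intelligence arithmetic (illustrative only).
# Only the 116 granted / 160 resolved counts, the 99% with-interview figure,
# and the +33.8% lift are taken from the dashboard; the TC baseline below is
# an assumption chosen to reproduce the displayed "+17.5% vs TC avg".

granted, resolved = 116, 160
career_allow_rate = granted / resolved               # 0.725 -> displayed as 72%

tc_average_rate = 0.55                               # assumed TC 2100 baseline
delta_vs_tc = career_allow_rate - tc_average_rate    # +0.175 -> "+17.5% vs TC avg"

# Interview lift: allowance rate among resolved cases with an interview minus
# the rate among those without. The without-interview rate is an assumption
# implied by the displayed 99% and +33.8% figures.
rate_with_interview = 0.99
rate_without_interview = rate_with_interview - 0.338  # ~65%, implied

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Delta vs TC average: {delta_vs_tc:+.1%}")
print(f"Interview lift: {rate_with_interview - rate_without_interview:+.1%}")
```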

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 160 resolved cases
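Each delta is just the examiner's statute-specific rate minus the Tech Center estimate for that statute. Back-calculating those estimates from the figures shown (a reading of the chart data, not independently sourced numbers) suggests a roughly uniform ~40% TC baseline:

```python
# Statute-specific performance, reproduced from the figures shown above.
# The TC baselines are back-calculated (examiner rate minus displayed delta)
# and are only as reliable as the dashboard's own "Tech Center average estimate".

examiner_rates = {"101": 0.226, "103": 0.442, "102": 0.118, "112": 0.129}
deltas_shown   = {"101": -0.174, "103": 0.042, "102": -0.282, "112": -0.271}

tc_estimates = {s: examiner_rates[s] - deltas_shown[s] for s in examiner_rates}

for statute in ("101", "103", "102", "112"):
    print(f"§{statute}: examiner {examiner_rates[statute]:.1%}, "
          f"TC est. {tc_estimates[statute]:.1%}, "
          f"delta {deltas_shown[statute]:+.1%}")
```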

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see Remarks filed 10/24/2025, have been fully considered.

Regarding the §101 abstract idea rejection, applicant argues: "the independent claims are amended to recite additional eligible subject matter. For example, independent claim 1 now recites 'using a communication network to obtain data from one or more data servers' and 'causing the processing device to use the metadata to perform the one or more transformations on the obtained data to generate the one or more features or feature sets.' Applicant respectfully submits that such operations or transformations cannot practically be performed mentally by a human." Examiner disagrees. The amendments, which merely utilize a computer to implement an abstract idea, neither integrate the claim into a practical application nor improve computer functionality. The claim still recites collecting, organizing, and transforming data to generate derived information, which is an abstract idea in the mental process grouping. Please see the full rejection below. The rejection of the claims under 35 U.S.C. § 101 is maintained.

Applicant's arguments filed with respect to the prior art rejections have been fully considered but they are moot. Applicant has amended the claims to recite new combinations of limitations. Please see below for the new grounds of rejection, necessitated by amendment.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims' subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) ("2019 PEG").

Regarding claim 1. Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites a method, which is a process. Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—each of the limitations identified below, under its broadest reasonable interpretation, covers the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III and the 2019 PEG, but for the recitation of generic computer components: "identifying one or more transformations to be applied to the data in order to generate one or more features or feature sets" (Mental processes - concept of observation and evaluation in the mind or using pen and paper); "generate metadata identifying the one or more features or feature sets and the one or more transformations" (Mental processes - concept of observation and evaluation in the mind or using pen and paper); "perform the one or more transformations on the obtained data to generate the one or more features or feature sets" (Mental processes - concept of observation and evaluation in the mind or using pen and paper).
These limitations fall within the mental process grouping of abstract ideas that can be performed in the human mind, or by a human with pencil and paper. Thus, Claim 1 recites an abstract idea. Step 2A, prong two: The claim includes the additional elements below: “using a communication network to obtain data from one or more data servers”: ;” (Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g))). “storing the one or more determined features or feature sets in a feature store;” (Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g))). “outputting at least some of the one or more determined features or feature sets or data associated with the at least some of the one or more determined features or feature sets from the feature store to at least one machine learning model” (Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g))). “causing a processing device to”… and “causing the processing device to”: Merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f); The above additional elements do not integrate the judicial exception into a practical application because providing and receiving data involves the mere gathering of data, which is insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g). The generic computer components in these steps are recited at a high-level of generality (i.e., as a generic computer component performing a generic computer function) such that it amounts no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Step 2B: The additional elements do not amount to significantly more because there are no additional limitations beyond the mental processes identified above. The limitation treated above, are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). 
It also includes limitations that Merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional element is insignificant application, which is similar to examples of activities that the courts have found to be insignificant extra-solution activity, in accordance with MPEP 2106.05(g), Insignificant Extra-Solution Activity, such as printing or downloading generated menus, Ameranth, 842 F.3d at 1241-42, 120 USPQ2d at 1854-55. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible. Regarding Claim 2, Step 2A Prong 1: The claim recites “the one or more transformations to be applied represent one or more transformations to be applied to experimental data in order to generate the one or more features or feature sets for the experimental data; and the at least some of the one or more determined features or feature sets or the data associated with the at least some of the one or more determined features or feature sets are output for evaluation.”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. Regarding Claim 3, Step 2A Prong 1: The claim recites “the one or more transformations comprise one or more first transformations; the metadata comprises first metadata; and the method further comprises: identifying one or more second transformations to be applied in order to generate the one or more features or feature sets for production data; generating second metadata identifying the one or more features or feature sets and the one or more second transformations; using the second metadata to determine the one or more features or feature sets for the production data.”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: The claim also recite, “storing the one or more second determined features or feature sets in the feature store; and outputting at least some of the one or more second determined features or feature sets or data associated with the at least some of the one or more second determined features or feature sets for inferencing.”: Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. 
Regarding Claim 4, Step 2A Prong 2, Step 2B: “receiving a query for determined features or feature sets stored in the feature store; wherein outputting the at least some of the one or more determined features or feature sets or the data associated with the at least some of the one or more determined features or feature sets comprises outputting any determined features or feature sets matching one or more criteria specified in the query or data associated with any determined features or feature sets matching the one or more criteria specified in the query”: Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. Regarding Claim 5, Step 2A Prong 1: The claim recites “determining whether portions of the specified data are associated with different time intervals; and resampling at least one portion of the specified data to time intervals associated with at least one other portion of the specified data..”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. Regarding Claim 6, Step 2A Prong 1: The claim recites “using the metadata to determine the one or more features or feature sets for the specified data comprises performing one or more in-memory transformations.”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: “storing the one or more determined features or feature sets comprises storing the one or more determined features or feature sets in a feature cache of the feature store”: Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. 
Regarding Claim 7, Step 2A Prong 1: The claim recites “identifying one or more additional transformations to be applied in order to generate one or more additional features or feature sets; generating additional metadata identifying the one or more additional features or feature sets and the one or more additional transformations; using the additional metadata to determine the one or more additional features or feature sets for additional specified data;”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: “storing the one or more determined additional features or feature sets in the feature store; and outputting at least some of the one or more additional determined features or feature sets or data associated with the at least some of the one or more additional determined features or feature sets from the feature store.”: Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. Regarding Claim 8, Step 2A Prong 1: The claim recites “generating a snapshot of a specified one of the one or more features or feature sets in the feature store identified as satisfying a query; and generating a metadata entry identifying the snapshot in the metadata associated with the specified feature or feature set.”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: “storing the snapshot in the feature store”: Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. Regarding Claim 9, Step 2A Prong 1: The claim recites “identifying an access control set associated with a portion of the specified data used to produce a specified one of the one or more features or feature sets; and identifying an access control setting for the specified feature or feature set in the feature store, the access control setting being as restrictive as or more restrictive than all settings in the access control set.”: Mental processes- concept of observation and evaluation in the mind or using pen and paper. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept. 
Regarding Claim 10, Claim 10 is directed to an apparatus, which is directed to a machine, one of the statutory categories. Claim 10 recites: an apparatus comprising: at least one processing device to perform a process that has limitations similar to the limitations of claim 1. Thus, claim 10 is rejected with the same rationale applied against claim 1. As performing a mental process or abstract idea on a generic computer component cannot integrate the abstract idea into a practical application and cannot provide an inventive concept, claim 10 remains subject matter ineligible. Regarding claims 11-16, claims 11-16 are dependent to claim 10 and recites limitations that are similar to the limitations recited in claims 2-9. Therefore, claims 11-16 are rejected with the same rationale applied against claims 2-9 above. Regarding Claim 17, Claim 17 is directed to an non-transitory computer readable medium storing computer readable program code that when executed causes one or more processors, which is directed to a machine, one of the statutory categories. Claim 17 recites: non-transitory computer readable medium storing computer readable program code that when executed causes one or more processors to perform a process that has limitations similar to the limitations of claim 1. Thus, claim 17 is rejected with the same rationale applied against claim 1. As performing a mental process or abstract idea on a generic computer component cannot integrate the abstract idea into a practical application and cannot provide an inventive concept, claim 17 remains subject matter ineligible. Regarding claims 18-23, claims 18-23 are dependent to claim 17 and recites limitations that are similar to the limitations recited in claims 2-7. Therefore, claims 18-23 are rejected with the same rationale applied against claims 2-7 above. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20210266145 A1) in view of Hur et al. (US 20240273413 A1). Regarding claim 1. Chen discloses a method comprising: using a communication network to obtain data from one or more data servers (see ¶166, “visual search queries containing user-defined vision functions (UVFs) 1104a-c are received from end-users 1102 of visual fog 1100. A UVF 1104 received from an end-user 1102 is first processed by a compiler 1110 in order to generate a vision dataflow graph for executing the UVF.”, also see ¶ 167, “the distributed runtime environment 1120 may perform the described visual data processing (e.g., initial pre-processing and/or UVF processing) by scheduling or distributing vision workloads across the available fog devices or resources 1140 (e.g., cloud servers 1140a, cameras 1140b, mobile devices, IoT devices, gateways, and/or other fog/edge devices).”, also see ¶ 168, “FIGS. 
12A-B illustrate another example visual fog architecture 1200. In the illustrated embodiment, visual fog architecture 1200 includes a network of fog devices 1216, including cameras or visual sensors 1216a, gateways 1216b, and cloud servers 1216c.”, also see ¶ 101, “Communications in the cellular network 260, for instance, may be enhanced by systems that offload data, extend communications to more remote devices, or both. The LPWA network 262 may include systems that perform non-Internet protocol (IP) to IP interconnections, addressing, and routing. Further, each of the IoT devices 204 may include the appropriate transceiver for wide area communications with that device. Further, each IoT device 204 may include other transceivers for communications using additional protocols and frequencies.”); identifying one or more transformations to be applied to the data in order to generate one or more features or feature sets (see ¶ 175, “Identifying the same object or person across different images or video streams, however, can be challenging. In some embodiments, for example, this task may require feature extraction to be performed across multiple cameras. The respective features extracted from each camera often differ, however, and not all cameras have the same field of view, and thus certain features may be successfully extracted from some cameras but not others. Accordingly, in some embodiments, a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other”, also see ¶ 269, “When a user creates an analytic image using VCL 5100, the analytic image schema is automatically set using the parameters described above in TABLE 1. VCL 5100 then creates a layer of abstraction with function calls of TileDB 5102 (e.g., the array-database manager used in the illustrated embodiment) combined with specialized transformation operations to provide an interface to the analytic image”, also see ¶¶ 174-177, i.e. one or more transformation == special transformation operation, UVF to define equivalency and generate features/sets == feature vector, feature extraction, also see ¶ 230-235, transformation to generate features); causing a processing device to generate metadata identifying the one or more features or feature sets and the one or more transformations (see ¶ 170, “The resulting visual data or metadata 1217 generated by the distributed runtime environment 1214 may then be stored in a database or data storage 1218.”, also see ¶ 177, “the visual processing dataflow 1308 for a particular query 1302 may leverage existing visual metadata that has already been generated and stored on data storage 1314. In other cases, however, further processing may be required to respond to the query 1302, and thus the visual processing dataflow 1308 may leverage the offline analytics framework 1312 to perform additional processing. 
In either case, the visual processing pipeline or dataflow 1308 generated by compiler 1304 is executed by the runtime environment in order to generate a response to the visual query 1302” , also see ¶¶ 174-176); causing the processing device to perform the one or more transformations on the obtained data to generate the one or more features or feature sets (see ¶ 177, “both the online and offline analytics frameworks 1310, 1312 may store their resulting visual metadata on data storage 1314 for use in responding to subsequent visual search queries.”, also see ¶ 235, “the unified API could be used to retrieve and/or combine visual metadata and the original visual data from different storage locations. The unified API may also allow certain types of processing to be performed on visual data before it is returned to the requesting user. Further, the unified API may allow users to explicitly recognize visual entities such as images, feature vectors, and videos, and may simplify access to those visual entities based on their relationship with each other and with other entities associated with a particular vision application.”, also see ¶¶ 174-176); storing the one or more determined features or feature sets in a feature store (see ¶ 227, “the resulting stream of visual data and/or metadata 1707 may then be stored in data storage 1706 for responding to subsequent visual search queries or UVFs.”, also see ¶ 233, “a visual data storage architecture 1800 designed to provide efficient access to visual data and eliminate the deficiencies of existing storage solutions used for visual data processing. In particular, storage architecture 1800 provides efficient metadata storage for searching visual data, as well as analysis-friendly formats for storing visual data.”, also see ¶ 230-235); and outputting at least some of the one or more determined features or feature sets or data associated with the at least some of the one or more determined features or feature sets from the feature store to at least one machine learning model (see ¶ 227, “the output 1713 may include, or may be derived from, a filtered stream of visual data and/or metadata 1707 generated by execution of the UVFs 1709.”, also see ¶ 230, “a CNN is first trained for a specific classification task using a set of images whose object classes or features have been labeled, and the CNN can then be used to determine the probability of whether other images contain the respective object classes.”, also see ¶ 231-235). Chen do not specifically teach causing the processing device to use the metadata to perform the one or more transformations on the obtained data to generate the one or more features or feature sets. Hur teaches causing the processing device to use the metadata to perform the one or more transformations on the obtained data to generate the one or more features or feature sets (see ¶ 84, “When a transformation condition is set for the vectorizer function, and the transformation condition is satisfied, the vectorizing unit 170 may transform the feature of the medical data with a vectorizer function. According to the real-time data transformation method described with reference to FIG. 6, the vectorizing unit 170 checks the features written in the feature data table in real time, looks up the feature type by referring to the feature metadata store 110, and checks the vectorizer function and transformation condition corresponding to the feature type in the vectorizer store 130. 
The vectorizing unit 170 puts the feature into a queue in which a vectorizer function and transformation condition are set, and when the transformation condition is satisfied, the vectorizing unit 170 may transform the feature with the vectorizer function and store the transformed feature in the transformation data store 190.”). Both Chen and Hur pertain to the problem of machine learning data features, thus being analogous. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine Chen and Hur to teach the above limitations. The motivation for doing so would be “The input data structure of the artificial intelligence model may vary depending on the training performance of the artificial intelligence model, and in the initial training phase, the input data is generated by applying all applicable vectorizer functions to each feature, and then the set of vectorizer functions for the features may be optimized by gradually selecting the transformed data that affect the artificial intelligence model's prediction results and the vectorizer functions that generate the transformed data. In other words, the prediction performance of the artificial intelligence model depends on the training data, and according to the complex and multifaceted natures of the medical data, it is difficult to determine which vectorization needs to be applied to ensure optimal prediction performance.” (see Hu ¶ 49). Regarding claim 2. Chen and Hur teaches the method of Claim 1, Chen further discloses wherein: the one or more transformations to be applied represent one or more transformations to be applied to experimental data in order to generate the one or more features or feature sets for the experimental data (see ¶ 175, “Identifying the same object or person across different images or video streams, however, can be challenging. In some embodiments, for example, this task may require feature extraction to be performed across multiple cameras. The respective features extracted from each camera often differ, however, and not all cameras have the same field of view, and thus certain features may be successfully extracted from some cameras but not others. Accordingly, in some embodiments, a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other”, also see ¶ 269, “When a user creates an analytic image using VCL 5100, the analytic image schema is automatically set using the parameters described above in TABLE 1. VCL 5100 then creates a layer of abstraction with function calls of TileDB 5102 (e.g., the array-database manager used in the illustrated embodiment) combined with specialized transformation operations to provide an interface to the analytic image”, also see ¶ 174-177, 227, 230-235); and the at least some of the one or more determined features or feature sets or the data associated with the at least some of the one or more determined features or feature sets are output for evaluation (see ¶ 175, “Identifying the same object or person across different images or video streams, however, can be challenging. In some embodiments, for example, this task may require feature extraction to be performed across multiple cameras. 
The respective features extracted from each camera often differ, however, and not all cameras have the same field of view, and thus certain features may be successfully extracted from some cameras but not others. Accordingly, in some embodiments, a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other”, also see ¶ 269, “When a user creates an analytic image using VCL 5100, the analytic image schema is automatically set using the parameters described above in TABLE 1. VCL 5100 then creates a layer of abstraction with function calls of TileDB 5102 (e.g., the array-database manager used in the illustrated embodiment) combined with specialized transformation operations to provide an interface to the analytic image”, also see ¶ 174-177, 227, 230-235). Regarding claim 3. Chen and Hur teaches the method of Claim 2, Chen further discloses wherein: the one or more transformations comprise one or more first transformations; the metadata comprises first metadata (see ¶ 174-177, 227, 230-235 and 269, transformation e.g. visual data in order to extract features); and the method further comprises: identifying one or more second transformations to be applied in order to generate the one or more features or feature sets for production data (see ¶ 175, “Identifying the same object or person across different images or video streams, however, can be challenging. In some embodiments, for example, this task may require feature extraction to be performed across multiple cameras. The respective features extracted from each camera often differ, however, and not all cameras have the same field of view, and thus certain features may be successfully extracted from some cameras but not others. Accordingly, in some embodiments, a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other”, also see ¶ 269, “When a user creates an analytic image using VCL 5100, the analytic image schema is automatically set using the parameters described above in TABLE 1. VCL 5100 then creates a layer of abstraction with function calls of TileDB 5102 (e.g., the array-database manager used in the illustrated embodiment) combined with specialized transformation operations to provide an interface to the analytic image”, also see ¶ 174-177, 227, 230-235 and 269); generating second metadata identifying the one or more features or feature sets and the one or more second transformations (see ¶ 170, “The resulting visual data or metadata 1217 generated by the distributed runtime environment 1214 may then be stored in a database or data storage 1218.”, also see ¶ 177, “the visual processing dataflow 1308 for a particular query 1302 may leverage existing visual metadata that has already been generated and stored on data storage 1314. In other cases, however, further processing may be required to respond to the query 1302, and thus the visual processing dataflow 1308 may leverage the offline analytics framework 1312 to perform additional processing. 
In either case, the visual processing pipeline or dataflow 1308 generated by compiler 1304 is executed by the runtime environment in order to generate a response to the visual query 1302”, also see ¶ 174-177, 227, 230-235 and 269); using the second metadata to determine the one or more features or feature sets for the production data (see ¶ 177, “both the online and offline analytics frameworks 1310, 1312 may store their resulting visual metadata on data storage 1314 for use in responding to subsequent visual search queries.”, also see ¶ 235, “the unified API could be used to retrieve and/or combine visual metadata and the original visual data from different storage locations. The unified API may also allow certain types of processing to be performed on visual data before it is returned to the requesting user. Further, the unified API may allow users to explicitly recognize visual entities such as images, feature vectors, and videos, and may simplify access to those visual entities based on their relationship with each other and with other entities associated with a particular vision application.”, also see ¶ 174-177, 227, 230-235 and 269); storing the one or more second determined features or feature sets in the feature store (see ¶ 227, “the resulting stream of visual data and/or metadata 1707 may then be stored in data storage 1706 for responding to subsequent visual search queries or UVFs.”, also see ¶ 233, “a visual data storage architecture 1800 designed to provide efficient access to visual data and eliminate the deficiencies of existing storage solutions used for visual data processing. In particular, storage architecture 1800 provides efficient metadata storage for searching visual data, as well as analysis-friendly formats for storing visual data.”, also see ¶ 174-177, 227, 230-235 and 269); and outputting at least some of the one or more second determined features or feature sets or data associated with the at least some of the one or more second determined features or feature sets for inferencing (see ¶ 227, “the output 1713 may include, or may be derived from, a filtered stream of visual data and/or metadata 1707 generated by execution of the UVFs 1709.”, also see ¶ 230, “a CNN is first trained for a specific classification task using a set of images whose object classes or features have been labeled, and the CNN can then be used to determine the probability of whether other images contain the respective object classes.”, also see ¶ also see ¶ 174-177, 190-195, 227, 230-235 ). Regarding claim 4. Chen and Hur teaches the method of Claim 1, Chen further discloses further comprising: receiving a query for determined features or feature sets stored in the feature store (see ¶ 170, “The resulting visual data or metadata 1217 generated by the distributed runtime environment 1214 may then be stored in a database or data storage 1218.”, also see ¶ 177, “the visual processing dataflow 1308 for a particular query 1302 may leverage existing visual metadata that has already been generated and stored on data storage 1314. In other cases, however, further processing may be required to respond to the query 1302, and thus the visual processing dataflow 1308 may leverage the offline analytics framework 1312 to perform additional processing. 
In either case, the visual processing pipeline or dataflow 1308 generated by compiler 1304 is executed by the runtime environment in order to generate a response to the visual query 1302” , also see ¶¶ 174-176, 227, 230-235); wherein outputting the at least some of the one or more determined features or feature sets or the data associated with the at least some of the one or more determined features or feature sets comprises outputting any determined features or feature sets matching one or more criteria specified in the query or data associated with any determined features or feature sets matching the one or more criteria specified in the query (see ¶ 170, “The resulting visual data or metadata 1217 generated by the distributed runtime environment 1214 may then be stored in a database or data storage 1218.”, also see ¶ 177, “the visual processing dataflow 1308 for a particular query 1302 may leverage existing visual metadata that has already been generated and stored on data storage 1314. In other cases, however, further processing may be required to respond to the query 1302, and thus the visual processing dataflow 1308 may leverage the offline analytics framework 1312 to perform additional processing. In either case, the visual processing pipeline or dataflow 1308 generated by compiler 1304 is executed by the runtime environment in order to generate a response to the visual query 1302”, also see ¶ 175, “a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other, meaning the feature vectors must be sufficiently similar but do not have to be an exact match. Further, if feature vectors from different cameras are missing certain features, only the partial features will be compared and the “close enough” definition will be scaled accordingly.” , also see ¶¶ 174-176, 227, 230-235,). Regarding claim 5. Chen and Hur teaches the method of Claim 1, Chen further discloses further comprising: determining whether portions of the specified data are associated with different time intervals; and resampling at least one portion of the specified data to time intervals associated with at least one other portion of the specified data (see ¶ 536, “he incident is then given a “name” for identification, routing, and/or networking purposes. In some embodiments, for example, the incident name may be derived using an arbitrary combination of information associated with the incident, such as location, time, event, type of incident, priority/importance/fatalities, image/video captured of the event, and so forth”, also see ¶ 538, “a user might only know the approximate time and place of an incident for purposes of querying the network, and thus the network can disseminate the query to the relevant data stores, and those with relevant data can then reply.”, also see ¶ 539, 595-599). Regarding claim 6. 
Chen and Hur teaches the method of Claim 1, Chen further discloses wherein: using the metadata to determine the one or more features or feature sets for the specified data comprises performing one or more in-memory transformations; and storing the one or more determined features or feature sets comprises storing the one or more determined features or feature sets in a feature cache of the feature store (see ¶¶ 190-195, “Cached visual analytics can be used to optimize visual processing using cached workflows, similar to incremental processing. For example, based on cached information regarding particular visual streams that have already been obtained and processed, along with the type of processing or workloads performed on those streams, subsequent vision processing dataflows may omit certain processing steps that have previously been performed and whose results have been cached. For example, a visual analytics application involves a number of primitive vision operations. The volume of computation can be reduced, however, by caching visual analytics results and reusing them for subsequent operations when possible. For example, when executing a visual analytics application, cached visual metadata resulting from prior processing can be searched to avoid duplicative computation. In some embodiments, for example, cached visual analytics may be implemented as follows: [0191] 1. Each primitive vision operation is tagged or labeled using a cache tag; [0192] 2. For each instance or stream of visual data (e.g., each stored video), any corresponding visual metadata that has already been generated is stored in a metadata database or cache; [0193] 3. If there is a cache tag hit for a particular primitive vision operation with respect to a particular instance or stream of visual data, then the particular primitive vision operation can be omitted and instead the existing visual metadata can be used; and [0194] 4. If there is a cache tag miss, however, the particular primitive vision operation is executed and the resulting metadata is cached in the metadata database for subsequent use.”, also ¶ 138-139, 174-177). Regarding claim 7. Chen and Hur teaches the method of Claim 1, Chen further discloses further comprising: identifying one or more additional transformations to be applied in order to generate one or more additional features or feature sets (see ¶ 175, “Identifying the same object or person across different images or video streams, however, can be challenging. In some embodiments, for example, this task may require feature extraction to be performed across multiple cameras. The respective features extracted from each camera often differ, however, and not all cameras have the same field of view, and thus certain features may be successfully extracted from some cameras but not others. Accordingly, in some embodiments, a user may implement a UVF to define how visual equivalency or “equal” operations are to be performed on visual data. In some embodiments, for example, a UVF for visual equivalency may define objects as “equal” if their feature vectors are “close enough” to each other”, also see ¶ 269, “When a user creates an analytic image using VCL 5100, the analytic image schema is automatically set using the parameters described above in TABLE 1. 
VCL 5100 then creates a layer of abstraction with function calls of TileDB 5102 (e.g., the array-database manager used in the illustrated embodiment) combined with specialized transformation operations to provide an interface to the analytic image”, also see ¶ 174-177, 227, 230-235 and 269); generating additional metadata identifying the one or more additional features or feature sets and the one or more additional transformations (see ¶ 170, “The resulting visual data or metadata 1217 generated by the distributed runtime environment 1214 may then be stored in a database or data storage 1218.”, also see ¶ 177, “the visual processing dataflow 1308 for a particular query 1302 may leverage existing visual metadata that has already been generated and stored on data storage 1314. In other cases, however, further processing may be required to respond to the query 1302, and thus the visual processing dataflow 1308 may leverage the offline analytics framework 1312 to perform additional processing. In either case, the visual processing pipeline or dataflow 1308 generated by compiler 1304 is executed by the runtime environment in order to generate a response to the visual query 1302”, also see ¶ 174-177, 227, 230-235 and 269); using the additional metadata to determine the one or more additional features or feature sets for additional specified data (see ¶ 177, “both the online and offline analytics frameworks 1310, 1312 may store their resulting visual metadata on data storage 1314 for use in responding to subsequent visual search queries.”, also see ¶ 235, “the unified API could be used to retrieve and/or combine visual metadata and the original visual data from different storage locations. The unified API may also allow certain types of processing to be performed on visual data before it is returned to the requesting user. Further, the unified API may allow users to explicitly recognize visual entities such as images, feature vectors, and videos, and may simplify access to those visual entities based on their relationship with each other and with other entities associated with a particular vision application.”, also see ¶ 174-177, 227, 230-235 and 269); storing the one or more determined additional features or feature sets in the feature store (see ¶ 227, “the resulting stream of visual data and/or metadata 1707 may then be stored in data storage 1706 for responding to subsequent visual search queries or UVFs.”, also see ¶ 233, “a visual data storage architecture 1800 designed to provide efficient access to visual data and eliminate the deficiencies of existing storage solutions used for visual data processing. 
In particular, storage architecture 1800 provides efficient metadata storage for searching visual data, as well as analysis-friendly formats for storing visual data.”, also see ¶ 174-177, 227, 230-235 and 269); and outputting at least some of the one or more additional determined features or feature sets or data associated with the at least some of the one or more additional determined features or feature sets from the feature store (see ¶ 227, “the output 1713 may include, or may be derived from, a filtered stream of visual data and/or metadata 1707 generated by execution of the UVFs 1709.”, also see ¶ 230, “a CNN is first trained for a specific classification task using a set of images whose object classes or features have been labeled, and the CNN can then be used to determine the probability of whether other images contain the respective object classes.”, also see ¶ also see ¶ 174-177, 190-195, 227, 230-235 ). Regarding claim 8. Chen and Hur teaches the method of Claim 1, Chen further discloses further comprising: generating a snapshot of a specified one of the one or more features or feature sets in the feature store identified as satisfying a query; storing the snapshot in the feature store; and generating a metadata entry identifying the snapshot in the metadata associated with the specified feature or feature set (see ¶ 528, “when an interesting event (e.g., anomalous, unusual, rare) occurs, a snapshot of local data is locked (e.g., securely stored) by the subject device that detected the event, thus preventing the data from being overwritten. Further, the subject that detected the event notifies other relevant subjects (e.g., nearby subjects in many cases) in real time to lock their respective counterpart data snapshots…he collection of data and metadata distributed across the respective subject devices is aggregated using visual fog networking and/or information-centric networking (ICN), thus allowing the respective data snapshots to be associated together and properly stored by the devices or nodes in the visual fog paradigm.”, also see ¶ 529, “the central or key evidence associated with an incident is unimpeded by data retention policies, as the relevant subject devices are notified in real time to collect and lock their respective data snapshots. As another example, information-centric networking (ICN) and/or event-based data routing can be leveraged to provide a more efficient approach for collecting, aggregating, and/or routing data.”, also see ¶ also see ¶ 174-177, 190-195, 227, 230-235). Regarding claim 9. 
Chen and Hur teaches the method of Claim 1, Chen further discloses further comprising: identifying an access control set associated with a portion of the specified data used to produce a specified one of the one or more features or feature sets; and identifying an access control setting for the specified feature or feature set in the feature store, the access control setting being as restrictive as or more restrictive than all settings in the access control set (see ¶ 441, “tenant isolation may be achieved using operating system-imposed resource restrictions, namespace restrictions, and/or process access controls, otherwise known as “containers.” Tenant isolation may further be achieved using virtualization, where a first VM isolates a first tenant from a second tenant of a second VM.”, also see ¶ 442, “certain networks may require a new fog node to be “onboarded” or “commissioned” before the fog node is allowed to access each network (e.g., using the onboarding/commissioning protocols of the Open Connectivity Foundation (OCF) and/or Intel's Secure Device Onboard (SDO) technology).” , also see ¶ 525-529 and 557, permission). Claim 10 recites an apparatus comprising: at least one processing device to perform the method recited in claim 1. Therefore the rejection of claim 1 above applies equally here. Chen also teaches the addition elements of claim 10 not recited in claim 1 comprising at least one processing device (see ¶ 74, ”Edge resources 110 may include any equipment, devices, and/or components deployed or connected near the “edge” of a communication network. In the illustrated embodiment, for example, edge resources 110 include end-user devices 112a,b (e.g., desktops, laptops, mobile devices), Internet-of-Things (IoT) devices 114, and gateways or routers 116, as described further below.”). Claims 11-16 recites an apparatus comprising: at least one processing device to perform the method recited in claims 2-9. Therefore the rejection of claims 2-9 above applies equally here. Claim 17 recites non-transitory computer readable medium storing computer readable program code that when executed causes one or more processors to perform the method recited in claim 1. Therefore the rejection of claim 1 above applies equally here. Chen also teaches the addition elements of claim 17 not recited in claim 1 comprising one or more processors (see ¶ 74, ”Edge resources 110 may include any equipment, devices, and/or components deployed or connected near the “edge” of a communication network. In the illustrated embodiment, for example, edge resources 110 include end-user devices 112a,b (e.g., desktops, laptops, mobile devices), Internet-of-Things (IoT) devices 114, and gateways or routers 116, as described further below.”). Claims 18-23 recites non-transitory computer readable medium storing computer readable program code that when executed causes one or more processors to perform the method recited in claims 2-7. Therefore the rejection of claims 2-7 above applies equally here. Related arts: Chandrahasan et al. (US 20230289649 A1) teaches an automated system for machine learning lineage inference exploits the metadata to find out whether feature transformations are applied when training datasets are derived from the parent datasets, and further what feature transformations are applied. Information of machine learning lineage inference for the feature transformations is persisted in a lineage store. Teague et al. 
(US 20210141801 A1) teaches the transformation functions applied may be distinguished between those functions targeting a training or test set feature set, targeting a training set feature set and a corresponding test set feature set, or a corresponding function targeting a test set feature set such as may apply a basis derived from a training set feature set stored in a metadata database in an application for preparing structured datasets for machine learning. Ouellet et al. (US 20210110219 A1) teaches the fusing based on meta-data of each of the one or more internal signals and each of the one or more external signals; generate a plurality of features based on one or more valid combinations that match a transformation input, the transformation forming part of a library of transformations; select one or more features from the plurality of features, based on a predictive strength of each feature, to provide a set of selected features. Yang et al. (US 20210097425 A1) teaches Machine learning platform 206 additionally obtains and/or generates feature metadata 218 that includes, but is not limited to, raw feature names, identifiers (IDs), values, locations, descriptions, transformations, and/or other information related to the inputted feature values. Edwards et al. (US 20220269978 A1) teaches the transformation code 214 may include features that can be tagged or stored in the feature store 106 and/or a feature library with metadata 216 to enable discovery and reuse. In certain exemplary implementations, the transformation code 214 may be stored on a code hosting and versioning platform 220 such as GitHub. Chang et al. (US 20210374562 A1) teaches After high-importance features 238 are identified from the corresponding importance scores 230, the component identifies the subset of primary features excluded from high-importance features 238 and uses a dependency graph and/or feature-transformation metadata for the primary features to identify a set of derived features that are calculated from the excluded primary features (e.g., by aggregating, scaling, combining, bucketizing, or otherwise transforming the excluded features).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571)272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley can be reached at (303) 297 - 4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /IMAD KASSIM/Primary Examiner, Art Unit 2129
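For orientation on the claim language the rejections keep quoting, claim 1 describes a metadata-driven feature pipeline: obtain data, record metadata naming the features and their transformations, apply the transformations per that metadata, store the results in a feature store, and output them to a machine learning model. Below is a minimal sketch of that flow; every name, type, and transformation in it is hypothetical, and it is not the applicant's implementation or any cited reference's code.

```python
# Hypothetical sketch of the pipeline recited in claim 1, for orientation only.
# All identifiers here are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FeatureMetadata:
    feature_name: str
    transformation: str          # name of the transformation to apply

@dataclass
class FeatureStore:
    features: dict = field(default_factory=dict)

    def put(self, name, values):
        self.features[name] = values

    def get(self, name):
        return self.features[name]

# Registry of named transformations (stand-in for a transformation library).
TRANSFORMS: dict[str, Callable] = {
    "mean_normalize": lambda xs: [x - sum(xs) / len(xs) for x in xs],
}

def build_features(raw_data, metadata, store):
    # "use the metadata to perform the one or more transformations on the
    # obtained data to generate the one or more features or feature sets"
    for m in metadata:
        transform = TRANSFORMS[m.transformation]
        store.put(m.feature_name, transform(raw_data[m.feature_name]))

# Usage: 'raw' stands in for data obtained from data servers over a network.
raw = {"sensor_reading": [1.0, 2.0, 3.0]}
meta = [FeatureMetadata("sensor_reading", "mean_normalize")]
store = FeatureStore()
build_features(raw, meta, store)
print(store.get("sensor_reading"))   # feature set ready to feed an ML model
```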

Prosecution Timeline

Mar 18, 2022
Application Filed
Apr 19, 2025
Non-Final Rejection — §101, §103
Oct 24, 2025
Response Filed
Jan 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596923
MACHINE LEARNING OF KEYWORDS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12572843
AGENT SYSTEM FOR CONTENT RECOMMENDATIONS
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12572854
ROOT CAUSE DISCOVERY ENGINE
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12566980
SYSTEM AND METHOD HAVING THE ARTIFICIAL INTELLIGENCE (AI) ALGORITHM OF K-NEAREST NEIGHBORS (K-NN)
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12566861
IDENTIFYING AND CORRECTING VULNERABILITIES IN MACHINE LEARNING MODELS
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+33.8%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 160 resolved cases by this examiner. Grant probability derived from career allow rate.
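The dashboard does not explain how it scores PTA risk. One plausible heuristic is to compare elapsed prosecution time against the examiner's median time to grant, as sketched below; the thresholds and the heuristic itself are assumptions for illustration, with only the dates and the 3y 8m median taken from this page.

```python
# Hypothetical heuristic for the "PTA Risk" label: elapsed prosecution time
# relative to the examiner's median time to grant. Thresholds are invented
# for illustration; the dashboard's actual scoring model is not disclosed.
from datetime import date

filing = date(2022, 3, 18)
latest_action = date(2026, 1, 16)                     # current final rejection
median_to_grant_days = int(3 * 365.25 + 8 * 30.44)    # ~3y 8m

elapsed = (latest_action - filing).days
ratio = elapsed / median_to_grant_days

if ratio < 0.75:
    risk = "Low"
elif ratio < 1.25:
    risk = "Moderate"
else:
    risk = "High"

print(f"Elapsed: {elapsed} days ({ratio:.0%} of median) -> PTA risk: {risk}")
```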
