Prosecution Insights
Last updated: April 19, 2026
Application No. 18/639,807

SYSTEM AND METHOD FOR ENRICHING CLOUD USAGE DATA

Final Rejection §101, §103
Filed
Apr 18, 2024
Examiner
BOND, REED MADISON
Art Unit
3624
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Oracle International Corporation
OA Round
2 (Final)
6%
Grant Probability
At Risk
3-4
OA Rounds
2y 8m
To Grant
39%
With Interview

Examiner Intelligence

Grants only 6% of cases
6%
Career Allow Rate
1 granted / 18 resolved
-46.4% vs TC avg
Strong +33% interview lift
+33.3%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
40 currently pending
Career history
58
Total Applications
across all art units

Statute-Specific Performance

§101
41.1%
+1.1% vs TC avg
§103
38.3%
-1.7% vs TC avg
§102
9.9%
-30.1% vs TC avg
§112
8.0%
-32.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 18 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. DETAILED ACTION The following FINAL Office Action is in response to the communication filed on 11/19/2025. Priority The Examiner notes the Applicant's claim of priority to Provisional Application 63/462,878, filed 04/28/2023. Status of Claims Claims 1, 3-4, 6-10, 12-13, 15-19, 21-22, 24-30 are currently pending. Claims 2, 5, 11, 14, 20, 23 are cancelled by Applicant. Claims 1, 3-4, 6-8, 10, 12-13, 15-17, 19, 21-22, 24-27 are currently amended. Claims 28-30 are newly added. Claims 1, 3-4, 6-10, 12-13, 15-19, 21-22, 24-30 are currently under examination and have been rejected as follows. IDS The information disclosure statements filed on 8/30/2024, 6/30/2025, 8/26/2025, 10/30/2025, and 1/19/2026 comply with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 and have been considered by the Examiner. -------------------------------------------------------------------------------------------------------------------------------------------------- Response to Amendment The previously pending rejections under 35 USC 101 will be maintained. The 101 rejection is updated in view of the amendments. New grounds for rejection under 35 USC 103 are applied as necessitated by the amendments. 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Response to Arguments Regarding Applicant’s remarks pertaining to 35 USC 101: Step 2A Prong 1: Applicant argues on page 14 of remarks 11/19/2025: “Applicant respectfully submits that (a) claim 1 is not directed to a judicial exception…. Claim 1 recites a specific manner of reducing end-to-end latency and improving the computational efficiency of generating enriched usage data for cloud resources that provides a specific improvement over prior systems…. Specifically, claim 1 recites executing a chunk-wise enrichment process to enrich raw usage data with corresponding enrichment metadata…. The chunk-wise enrichment process substantially reduces latency over prior systems….” Continued on page 16: “While some of the claim recitations may relate to generating a stream of enriched usage data useable in contracts or business relations, those contracts and/or business relations are not recited in the claim.” Examiner respectfully disagrees. 
While not literally recited in the claims, the purpose behind enriching the data and partitioning the data into time windows is to improve the process of monitoring, measuring, and evaluating customer usage of provided resources and services by an enterprise, as detailed in Applicant specification ¶ [0008]: “Such enrichment of raw usage data can be utilized to reduce end-to-end latency, negating or removing several steps that negatively contribute to the end-to-end latency including the collecting of the usage data from the service teams at the metering hour intervals, the collecting of the metadata to be associated with the collected usage data, the enriching of the usage data with the proper metadata, the pricing of the usage, and then the step of ultimately preparing an invoice based on the priced usage.” Thus the claims describe, recite, or set forth agreements in the form of contracts and business relations as they pertain to commercial or legal interactions under the larger abstract grouping of Certain Methods of Organizing Human Activity (MPEP 2106.04(a)(2) II). Accordingly, the claims recite an abstract idea. Step 2A Prong 2: Applicant argues on page 17 of remarks 11/19/2025: “…Claim 1 recites a specific manner of reducing end-to-end latency and improving the computational efficiency of generating enriched usage data for cloud resources that provides a specific improvement over prior systems.” Continued on page 18: “Thus claim 1 provides a specific technical solution to the technical problem of latency in prior systems…. “Because claim 1 recites additional elements that integrate the alleged judicial exception into a practical application of that exception, claim 1 is patent-eligible under at least Prong Two of Step 2A.” Examiner respectfully disagrees. 
Independent claims 1, 10, 19 recite the following additional computer-based elements: “cloud resource”, “cloud service system”, “system”, “computer”, “processor device”, “non-transitory memory”, “non-transitory computer readable storage medium”, and “computer system”. No new additional computer-based elements were presented with the amendments to the claims. The functions of the additional elements include examples such as “storing a dataset”, “enrich raw data with corresponding enrichment data”, “extracting… a first chunk of raw usage data for the first cloud resource”, “determining that (a) the first cloud resource is within the set of cloud resources and (b) the first usage time parameter corresponds to the first time window”, “performing a database join operation on the first set of enrichment metadata and the first chunk of raw usage data”, and “generating and transmitting a first stream of enriched usage data”. The additional elements and their functions are recited at a high level of generality (i.e. as a generic computer performing functions of gathering statistics, monitoring, organizing, aggregating, and merging data, etc.) such that they still amount to no more than mere instructions to apply the exception using generic computer components. While these functions may mitigate latency for preparing priced usage and invoices, insufficient technical details are provided for how raw data is extracted and enriched, how join operations are performed, etc. and thus how technical process steps are removed such that the latency is reduced by the additional elements’ functions, rendering an improvement over existing technology unclear. Therefore, these functions can be viewed as not meaningfully different than a business method or mathematical algorithm being applied on a general-purpose computer as tested per MPEP 2106.05(f)(2)(i). 
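To make the disputed limitation concrete, the claimed chunk-wise enrichment (selecting a metadata set by checking that the cloud resource belongs to the stored resource set and that the usage time falls within that set's time window, then joining the metadata with the raw usage chunk) can be sketched as follows. This is a minimal illustrative sketch only; every name, data shape, and value below is a hypothetical assumption, not drawn from the claims, the specification, or the cited references:

```python
from datetime import datetime

# Hypothetical enrichment metadata sets, each keyed to a resource set
# and a distinct time window (all values are illustrative assumptions).
ENRICHMENT_SETS = [
    {
        "resources": {"vm-001", "db-002"},
        "window": (datetime(2025, 1, 1, 0), datetime(2025, 1, 1, 1)),
        "metadata": {"vm-001": {"rate": 0.05}, "db-002": {"rate": 0.12}},
    },
    {
        "resources": {"vm-001", "db-002"},
        "window": (datetime(2025, 1, 1, 1), datetime(2025, 1, 1, 2)),
        "metadata": {"vm-001": {"rate": 0.07}, "db-002": {"rate": 0.12}},
    },
]

def select_metadata(resource_id, usage_time):
    """Pick the metadata set whose resource set contains the resource
    and whose time window covers the usage time parameter."""
    for s in ENRICHMENT_SETS:
        start, end = s["window"]
        if resource_id in s["resources"] and start <= usage_time < end:
            return s["metadata"]
    return None

def enrich_chunk(raw_chunk):
    """Join each raw usage record in the chunk with its time-windowed
    enrichment metadata, producing a chunk of enriched usage records."""
    enriched = []
    for rec in raw_chunk:
        meta = select_metadata(rec["resource_id"], rec["usage_time"])
        if meta is not None:
            # Merge raw record fields with the matching metadata fields.
            enriched.append({**rec, **meta[rec["resource_id"]]})
    return enriched

raw = [{"resource_id": "vm-001",
        "usage_time": datetime(2025, 1, 1, 0, 30),
        "units": 4}]
print(enrich_chunk(raw))  # first window matches, so its rate is joined in
```

In a production system the in-memory lookup and dictionary merge would typically be a database join against time-partitioned tables, but the selection criteria (resource-set membership plus time-window match) are the same.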
Step 2B: Applicant argues on page 19 of remarks 11/19/2025: “Claim 1 recites techniques that are not well-understood, routine, conventional activity…. The chunk-wise enrichment process substantially reduces latency over prior systems (see, e.g., Specification at [0149] and [0202]). In addition, claim 1 provides improved computational efficiency over prior systems (see, e.g., Specification at [0149] and [0202]).” Continued on page 20: “The Examiner has not demonstrated that the elements recited in claim 1 are well understood, routine, conventional activity.” Examiner respectfully disagrees. Examiner’s previous eligibility analysis did not rely upon establishing well-understood, routine, or conventional activities recited in the claims. The Berkheimer conventionality test is one option for examination among MPEP 2106.05(a)-(h), but is not required. However, assuming, arguendo, that further evidence were required to demonstrate conventionality of the additional, computer-based elements, Examiner would further rely on MPEP 2106.05(d) guidelines to demonstrate that said additional elements are also well-understood, routine, and conventional. In such case, Examiner would rely as evidence on Applicant’s own Specification. Applicant Specification ¶ [0144] describes the general function of the additional elements cloud resource and cloud service system: “With some subscription plan management (SPM) cloud service systems, service teams generate regularly occurring reports on usage of cloud resources by customers. The cloud resources may be grouped in compartments of the cloud service system that are accessible by designated customers associated with those compartments. 
The compartments define boundaries that may contain computer resources, database resources, block volume resources, and any other tools or services that might be made available by the cloud host to their customers associated with their respective compartments.” Applicant Specification ¶ [0151] goes on to describe the improvement to the business process, such as “de-serialize the processing and instead process the usage data immediately as it is streamed from the service teams by joining the customer raw resource usage data with entities comprising small metadata units representative of historic time series enrichment data”, and ¶ [0152]: “The entities comprising the small metadata units stored in the entity maps can be utilized to ensure that the raw resource usage data received from the service teams is properly enriched by joining the usage data with metadata units contained in the entity maps having time windows associated therewith that match the times of the use of the resources as indicated in the raw usage data.” However, these and other examples in the specification describe well-known, routine, or conventional cloud computing techniques such as extracting customer usage data from a cloud service, joining the raw data with metadata to create enriched data, and performing these functions in specific time intervals. The functions are recited in the claims and the specification at a high level of generality (see Applicant Specification ¶ [0243]: “In accordance with various embodiments, the teachings herein can be implemented using one or more computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings herein. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.”). 
Accordingly, the previously pending rejections under 35 USC 101 will be maintained. The 101 rejection is updated in view of the amendments. -------------------------------------------------------------------------------------------------------------------------------------------------- Regarding Applicant’s remarks pertaining to 35 USC 103: Applicant argues starting on page 22 of remarks 11/19/2025: “The cited references fail to teach or disclose selecting the first set of enrichment metadata based on determining that (a) the first cloud resource is within the set of cloud resources and (b) the first usage time parameter corresponds to the first time window.” Examiner considers the claims as amended but finds the argument moot based on new grounds. Examiner presents art reference Dageville et al. US 20230316348 A1, hereinafter Dageville, as analogous art of enriching cloud usage data. The amended claim limitation “selecting the first set of enrichment metadata based on determining that (a) the first cloud resource is within the set of cloud resources and (b) the first usage time parameter corresponds to the first time window” is taught, in combination with original primary reference Moustafa, by Dageville at ¶ [0027, 0085-0086]. Applicant argues on page 24 of remarks 11/19/2025: “The cited references also fail to teach or disclose performing a database join operation on the first set of enrichment metadata and the first chunk of raw usage data to generate a first chunk of enriched usage data.” Examiner considers the claims as amended but finds the argument moot based on new grounds. 
The amended claim limitation “performing a database join operation on the first set of enrichment metadata and the first chunk of raw usage data to generate a first chunk of enriched usage data” is taught, in combination with original primary reference Moustafa, by Dageville at ¶ [0012, 0016, 0033]. Other deficiencies in primary reference Moustafa not cured by original secondary reference Sundaram in teaching the claims as amended are addressed by Dageville as necessitated by the amendments. Additional details and citations can be found in the 103 rejection section below. Accordingly, new grounds for rejection 35 USC 103 are applied as necessitated by the amendments. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 3-4, 6-10, 12-13, 15-19, 21-22, 24-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 3-4, 6-9, 28 are directed to a method or process which is a statutory category. Claims 10, 12-13, 15-18, 29 are directed to a system or machine which is a statutory category. Claims 19, 21-22, 24-27, 30 are directed to a non-transitory computer readable storage medium or article of manufacture which is a statutory category. Step 2A Prong One: The claims recite, describe, or set forth a judicial exception of an abstract idea (see MPEP 2106.04(a)). 
Specifically, the claims recite, describe or set forth agreements in the form of contracts and business relations, including: “extracting a stream of raw usage data representative of resource usage for a first cloud resource of the cloud service system, a first chunk of raw usage data for the first cloud resource”, “the first chunk of raw usage data comprising (a) a resource identification parameter that identifies the first cloud resource from among a plurality of cloud resources available in the cloud service system, and (b) a first usage time parameter corresponding to usage of the first cloud resource”, and “performing a database join operation on the first set of enrichment metadata and the first chunk of raw usage data to generate a first chunk of enriched usage data”. Monitoring, measuring, and evaluating customer usage of provided resources and services by an enterprise fall within agreements in the form of contracts and business relations as they pertain to commercial or legal interactions under the larger abstract grouping of Certain Methods of Organizing Human Activity (MPEP 2106.04(a)(2) II). Accordingly, the claims recite an abstract idea. Step 2A Prong Two: Independent claims 1, 10, 19 recite the following additional elements: “cloud resource”, “cloud service system”, “system”, “computer”, “processor device”, “non-transitory memory”, “non-transitory computer readable storage medium”, and “computer system”. The additional elements are recited at a high level of generality (i.e. as a generic computer performing functions of gathering statistics, monitoring, and aggregating data, etc.) such that they amount to no more than mere instructions to apply the exception using generic computer components. Therefore, these functions can be viewed as not meaningfully different than a business method or mathematical algorithm being applied on a general-purpose computer as tested per MPEP 2106.05(f)(2)(i). 
The claims are directed to an abstract idea, and the judicial exception is not integrated into a practical application. Step 2B: Per MPEP 2106.05(f)(1), the Examiner considers whether the claim recites only the idea of a solution or outcome, i.e., whether the claims fail to recite the technological details of how the actual technological solution to the actual technological problem is accomplished. The recitation of claim limitations that attempt to cover an entrepreneurial and thus abstract solution to an entrepreneurial problem, with no technological details on how the technological result is accomplished and no description of the mechanism for accomplishing the result, does not provide significantly more than the judicial exception. Dependent claims 6, 15, 24 recite the additional element “consumer processor device”. The additional element is also recited at a high level of generality (i.e., as a generic computer performing functions of processing and organizing data, being associated with a cloud service system, streaming usage data, etc.) such that it amounts to no more than mere instructions to apply the exception using generic computer components. Dependent claims 2-5, 7-9, 11-14, 16-18, 20-23, 25-30 do not appear to provide further additional computer-based elements, let alone additional computer-based elements that integrate the abstract idea into a practical application (Step 2A prong two) or provide significantly more (Step 2B). Further, dependent claims 3-4, 7-9, 12-13, 16-18, 21-22, 25-30 merely incorporate the additional elements recited in claims 1, 10, 19 along with further narrowing of the abstract idea of claims 1, 10, 19 and their execution of the abstract idea. 
Specifically, the dependent claims narrow the “cloud resource”, “cloud service system”, “system”, “computer”, “processor device”, “non-transitory memory”, “non-transitory computer readable storage medium”, and “computer system” to capabilities such as selecting, joining, generating, indicating, associating, comprising, receiving, refreshing, storing, and adding various forms of data such as enrichment metadata, usage time parameters, time windows, time-series data, datasets, enriched usage data, streams, use times, boundary identification parameters, resource identification parameters, item values, etc., which, when evaluated per MPEP 2106.05(f)(2), represent mere invocation of computers to perform existing processes. Therefore, the additional elements recited in the claimed invention, individually and in combination, fail to integrate a judicial exception into a practical application (Step 2A prong two), and for the same reasons they also fail to provide significantly more (Step 2B). Thus, claims 1, 3-4, 6-10, 12-13, 15-19, 21-22, 24-30 are reasoned to be patent ineligible. -------------------------------------------------------------------------------------------------------------------------------------------------- REJECTIONS BASED ON PRIOR ART Examiner Note: Some rejections will contain bracketed comments preceded by an “EN” that denote an examiner note. These are placed to further explain a rejection. -------------------------------------------------------------------------------------------------------------------------------------------------- Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 3-4, 6-10, 12-13, 15-19, 21-22, 24-30 are rejected under 35 U.S.C. 103 as being unpatentable over: Moustafa US 20220335340 A1, hereinafter Moustafa in view of Dageville US 20230316348 A1, hereinafter Dageville. 
Regarding Claims 1, 10, 19: Moustafa teaches: (Claim 1) A method, comprising: (Claim 10) A system, comprising: (Claim 19) A non-transitory computer readable storage medium having instructions thereon, which when performed in a computer system including a processor device operatively coupled to the non-transitory computer readable storage medium cause the computer to perform steps comprising: storing a dataset comprising (a) [..] enrichment metadata [..] for a set of cloud resources of a cloud service system and (b) [..] enrichment metadata [..] for the set of cloud resources, [..] (Moustafa ¶ [0106]: The ADM system 600 includes an example resource manager 642 and an example distributed datastore 644, which includes an example metadata storage 646, and an example raw data store 648. ¶ [0057]: the ADM system may evaluate (e.g., continuously evaluate) data within one or more non-target data streams to establish a baseline pattern (e.g., a nominal pattern, etc.) of one or more data streams being consumed by other AI application nodes over time. ¶ [0065]: the ADM system may utilize a number of types of attributes to classify data and metadata. For example, sensor inputs, times and dates, classifications of visual objects in a video feed, chains of receivership, blockchaining for security, among other attributes to support a trusted “paper trail” for data use and ownership); and [..], the [..] enrichment process comprising: extracting, from a stream of raw usage data representative of resource usage for a first cloud resource of the cloud service system, [..] raw usage data for the first cloud resource (See Moustafa Fig. 14 element 1438: Central Node/Cloud (DDI Manager) [EN: cloud service system]. ¶ [0325]: The example machine readable instructions and/or the example operations 2500 of FIG. 25 begin at block 2502, at which the data usage monitoring circuitry 1300 instantiates/deploys a DDI node. 
For example, the resource manager orchestration circuitry 1304 can orchestrate resources in the edge environment to instantiate a DDI node…. In some examples, the DDI node is a virtual node [EN: cloud resource] that is instantiated within another physical node. For example, the resource manager orchestration circuitry 1304 can deploy a virtual DDI node into the AI application node that is to be tracked. ¶ [0326]: At block 2504, the data usage monitoring circuitry 1300 ingests network traffic [EN: raw usage data] (e.g., ingests data in data streams in the network traffic) at the DDI node. For example, the interface circuitry 1302 (FIG. 13) can ingest [EN: extract] data, such as video data or audio data. [Also see Fig. 25 element 2504: Ingest network traffic at DDI node over period of time; and Figs. 13, 14, 25 and related text]), the [..] raw usage data comprising: (a) a resource identification parameter that identifies the first cloud resource from among a plurality of cloud resources available in the cloud service system (Moustafa ¶ [0050]: In some examples, the content type of a data stream, a sensitive attribute of a data stream, a security level of a data stream, a source location of a source node sourcing the data stream, etc., may be described as such characteristics within the data stream. For example, flags associated with such characteristics may be in the headers of data packets within the data stream. In some examples, data within the data stream may be tagged with metadata describing such characteristics. In some examples, data analysis logic within one or more nodes with access to the data stream may determine such data stream characteristics (e.g., characteristics regarding the content type of a data stream, a sensitive attribute of a data stream, a security level of a data stream, a source location of a data stream [EN: resource ID parameter], etc.) 
by analyzing the data in the data stream (e.g., analyzing a data payload within one or more data packets in the data stream). [Also see Fig. 23 element 2302: determine a service type attribute of the AI application node; and element 2304: determine a usage context of the AI application node; and related text]), and (b) a [..] usage time parameter that indicates the reported [..] time (Moustafa ¶ [0065]: the ADM system may utilize a number of types of attributes [EN: parameters] to classify data and metadata. For example, sensor inputs, times and dates, classifications of visual objects in a video feed, chains of receivership, blockchaining for security, among other attributes to support a trusted “paper trail” for data use and ownership); [..]; performing a [..] operation on the [..] set of enrichment metadata and the [..] raw usage data to generate [..] enriched usage data (Moustafa ¶ [0354]: At block 2904, the data usage monitoring circuitry 1300, based on policy 1322, may tag data in the target data stream with metadata for tracking. For example, the DRM circuitry 1318 may tag data in the target data stream with metadata tags corresponding to tracking information about the time, date, source location, ownership of the data, identification of target AI application node attempting the consumption, and / or other types of tags); and generating and transmitting a [..] stream of enriched usage data comprising the [..] enriched usage data (Moustafa mid-¶ [0163]: For example, a first one of the edge switches 804 may observe 10% of the edge network environment 800 and a second one of the edge switches 804 may observe 90% of the edge network environment 800, which may become the basis for the differences in data outputs generated by the algorithms 638 executed by the first and second one of the devices. 
In some such examples, the first one of the devices may transmit and/or otherwise propagate data outputs from its execution of the algorithms 638 to the second one of the devices). Although Moustafa teaches storing a dataset comprising enrichment metadata, extracting raw usage data from a cloud resource, resource identification parameters, usage time parameters, and generating enriched metadata, Moustafa does not specifically teach multiple raw usage data and enrichment metadata datasets as “chunks” with distinct and different time windows. However, Dageville in analogous art of enriching cloud usage data teaches or suggests: [..] (a) a first set of enrichment metadata corresponding to a first time window for a set of cloud resources of a cloud service system and (b) a second set of enrichment metadata corresponding to a second time window for the set of cloud resources, wherein the first time window is different from the second time window (Dageville mid-¶ [0088]: For example, if the incremental interval is a day, then there are on average 30 incremental intervals during a billing interval (a month in the current example). Thus, during each incremental interval [EN: different time periods], the monetizer 405 may monitor the usage level of each monetized listing during that incremental interval and store the usage of each monetized listing during that incremental interval in table 404F as an incremental monetizer record. More specifically, the monetizer 405 may read unprocessed data from the table 404E, and perform an enrichment to normalize the data in the table 404E, since the data in table 404E is at a daily granularity and only has the query counts, while the data in table 404F needs to have the fields from the pricing plan... It should be noted that the incremental interval may be any appropriate time period (e.g., hour, two hours, a day, etc…. 
For example, the monetizer 405's reading of raw data from the stream of the table 404A, extraction of the listing usage information, and storage of the result into table 404C may occur every 15 mins as data from the Job DPO 430 is imported every 15 mins, Meanwhile, the merging of the metadata received from the listing import DPO 440 into the table 404B may occur every hour as data from the listing import DPO 440 is imported once an hour)); executing a chunk-wise enrichment process to enrich raw usage data with corresponding enrichment metadata (Dageville ¶ [0088]: …More specifically, the monetizer 405 may read unprocessed [EN: raw] data from the table 404E, and perform an enrichment to normalize the data in the table 404E, since the data in table 404E is at a daily granularity and only has the query counts, while the data in table 404F needs to have the fields from the pricing plan... It should be noted that the incremental interval [EN: chunk] may be any appropriate time period (e.g., hour, two hours, a day, etc…. For example, the monetizer 405's reading of raw data from the stream of the table 404A, extraction of the listing usage information, and storage of the result into table 404C may occur every 15 mins as data from the Job DPO 430 is imported every 15 mins, Meanwhile, the merging of the metadata received from the listing import DPO 440 into the table 404B may occur every hour as data from the listing import DPO 440 is imported once an hour)); selecting the first set of enrichment metadata based on determining that (a) the first cloud resource is within the set of cloud resources and (b) the first usage time parameter corresponds to the first time window (Dageville ¶ [0027]: Query processing 130 may handle query execution within elastic clusters of virtual machines, referred to herein as virtual warehouses or data warehouses…. The virtual warehouses 131 may be one or more virtual machines operating on the cloud computing platform 110. 
Mid-¶ [0085]: …the monetizer 405 may schedule a price plan extraction task to run at regular intervals (e.g., every hour or more frequently) which will retrieve all the latest versions of the metadata for each monetized listing from the table 404B. This task may run at any appropriate interval (e.g., on an hourly basis). The monetizer 405 may then determine for each monetized listing, whether there is a price entry for the current month in table 404D. Mid-¶ [0086]: For each identified monetized listing (identified based on import IDs), the monetizer 405 may look up the corresponding entry in the listing import DPO 440, and obtain the pricing plan for that monetized listing. If the monetizer 405 determines that the job ID corresponds to the first query [EN: see Fig. 1A query processing with set of ‘virtual warehouse’ cloud resources] in the current billing interval [EN: first time window], then the monetizer 405 may add a fixed price charge and then add a per-query charge for each subsequent use of the monetized listing); performing a database join operation on the first set of enrichment metadata and the first chunk of raw usage data to generate a first chunk of enriched usage data (Dageville mid-¶ [0012]: Data cleaning, de-identification, aggregation, joining, and other forms of data enrichment need to be performed by the owner of data before it is shareable with another party. Mid-¶ [0016]: In addition, participants in a private ecosystem data exchange may work together to join their datasets [EN: chunks of raw data] together to jointly create a useful data product [EN: chunk of enriched data] that any one of them alone would not be able to produce. Once these joined datasets are created, they may be listed on the data exchange or on the data marketplace. End-¶ [0033]: In addition, database operations (joining, aggregating, analysis, etc.) 
ascribed to a user (consumer or provider) shall be understood to include performing of such actions by the cloud computing service 112 in response to an instruction from that user). Dageville and Moustafa are found as analogous art of enriching cloud usage data. It would have been obvious to one skilled in the art, before the effective filing date of the invention, to have modified Moustafa’s data usage monitoring system, apparatus, and method to have included Dageville’s teachings around data enrichment applied to specific chunks of data streams and time windows. These additional features would have reduced costs and latency for customers (Dageville ¶ [0012]). The predictability of such modifications and/or variations would have been corroborated by the broad level of skill of one of ordinary skill in the art as articulated by Moustafa in view of Dageville (see MPEP 2143 G). Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of enriching cloud usage data. In such a combination, each element would have merely performed the same function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given existing technical ability to combine the elements, as evidenced by Moustafa in view of Dageville above, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner. Thus, it would have been reasoned that the results of the combination would have been predictable (see MPEP 2143 A). Regarding Claims 3, 12, 21: Moustafa / Dageville teaches all the limitations of claims 1, 10, 19 above. Moustafa further teaches: extracting, from the stream of raw usage data, [..] raw usage data for the first cloud resource, the [..] 
raw usage data comprising: (a) the resource identification parameter (Moustafa ¶ [0050]: In some examples, the content type of a data stream, a sensitive attribute of a data stream, a security level of a data stream, a source location of a source node sourcing the data stream, etc., may be described as such characteristics within the data stream. For example, flags associated with such characteristics may be in the headers of data packets within the data stream. In some examples, data within the data stream may be tagged with metadata describing such characteristics. In some examples, data analysis logic within one or more nodes with access to the data stream may determine such data stream characteristics (e.g., characteristics regarding the content type of a data stream, a sensitive attribute of a data stream, a security level of a data stream, a source location of a data stream [EN: resource ID parameter], etc.) by analyzing the data in the data stream (e.g., analyzing a data payload within one or more data packets in the data stream). [Also see Fig. 23 element 2302: determine a service type attribute of the AI application node; and element 2304: determine a usage context of the AI application node; and related text]), and (b) a [..] usage time parameter corresponding to usage of the [..] cloud resource (Moustafa ¶ [0065]: the ADM system may utilize a number of types of attributes [EN: parameters] to classify data and metadata. For example, sensor inputs, times and dates, classifications of visual objects in a video feed, chains of receivership, blockchaining for security, among other attributes to support a trusted “paper trail” for data use and ownership); [..]; and performing a [..] operation on the [..] set of enrichment metadata and the [..] raw usage data to generate [..] enriched usage data, wherein the [..] stream of enriched usage data comprises [..] 
enriched usage data (Moustafa ¶ [0354]: At block 2904, the data usage monitoring circuitry 1300, based on policy 1322, may tag data in the target data stream with metadata for tracking. For example, the DRM circuitry 1318 may tag data in the target data stream with metadata tags corresponding to tracking information about the time, date, source location, ownership of the data, identification of target AI application node attempting the consumption, and / or other types of tags). Although Moustafa teaches storing a dataset comprising enrichment metadata, extracting raw usage data from a cloud resource, resource identification parameters, usage time parameters, and generating enriched metadata, Moustafa does not specifically teach a database join operation with multiple raw usage data and enrichment metadata datasets as “chunks” with distinct and different time windows. However, Dageville in analogous art of enriching cloud usage data teaches or suggests: selecting the second set of enrichment metadata based at least on determining that the first cloud resource is within the set of cloud resources (Dageville ¶ [0027]: Query processing 130 may handle query execution within elastic clusters of virtual machines, referred to herein as virtual warehouses or data warehouses…. The virtual warehouses 131 may be one or more virtual machines operating on the cloud computing platform 110. Mid-¶ [0085]: …the monetizer 405 may schedule a price plan extraction task to run at regular intervals (e.g., every hour or more frequently) which will retrieve all the latest versions [EN: multiple sets] of the metadata for each monetized listing from the table 404B. This task may run at any appropriate interval (e.g., on an hourly basis). The monetizer 405 may then determine for each monetized listing, whether there is a price entry for the current month in table 404D. 
Mid-¶ [0086]: For each identified [EN: multiple] monetized listing (identified based on import IDs), the monetizer 405 may look up the corresponding entry in the listing import DPO 440, and obtain the pricing plan for that monetized listing. If the monetizer 405 determines that the job ID corresponds to the first query [EN: see Fig. 1A query processing with set of ‘virtual warehouse’ cloud resources] in the current billing interval, then the monetizer 405 may add a fixed price charge and then add a per-query charge for each subsequent [EN: multiple] use of the monetized listing); performing a database join operation on the second set of enrichment metadata and the second chunk of raw usage data to generate a second chunk of enriched usage data, wherein the first stream of enriched usage data comprises the second chunk of enriched usage data (Dageville mid-¶ [0012]: Data cleaning, de-identification, aggregation, joining, and other forms of data enrichment need to be performed by the owner of data before it is shareable with another party. Mid-¶ [0016]: In addition, participants in a private ecosystem data exchange may work together to join their datasets [EN: chunks of raw data] together to jointly create a useful data product [EN: chunk of enriched data] that any one of them alone would not be able to produce. Once these joined datasets are created, they may be listed on the data exchange or on the data marketplace. End-¶ [0033]: In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing of such actions by the cloud computing service 112 in response to an instruction from that user). Dageville and Moustafa are found as analogous art of enriching cloud usage data. 
It would have been obvious to one skilled in the art, before the effective filing date of the invention, to have modified Moustafa’s data usage monitoring system, apparatus, and method to have included Dageville’s teachings around a database join operation with multiple raw usage data and enrichment metadata datasets as “chunks” with distinct and different time windows. These additional features would have reduced costs and latency for customers (Dageville ¶ [0012]). The predictability of such modifications and/or variations would have been corroborated by the broad level of skill of one of ordinary skill in the art as articulated by Moustafa in view of Dageville (see MPEP 2143 G). Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of enriching cloud usage data. In such a combination, each element would have merely performed the same function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given existing technical ability to combine the elements, as evidenced by Moustafa in view of Dageville above, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner. Thus, it would have been reasoned that the results of the combination would have been predictable (see MPEP 2143 A). Regarding Claims 4, 13, 22: Moustafa / Dageville teaches all the limitations of claims 3, 12, 21 above. Although Moustafa teaches enriching streams of usage data with metadata, Moustafa does not specifically teach determining a specific time window for the enrichment. 
However, Dageville in analogous art of enriching cloud usage data teaches or suggests: wherein selecting the second set of enrichment metadata is further based on determining that the second usage time parameter corresponds to the second time window (Dageville mid-¶ [0088]: For example, if the incremental interval is a day, then there are on average 30 incremental intervals during a billing interval (a month in the current example). Thus, during each incremental interval [EN: different time periods], the monetizer 405 may monitor the usage level of each monetized listing during that incremental interval and store the usage of each monetized listing during that incremental interval in table 404F as an incremental monetizer record. More specifically, the monetizer 405 may read unprocessed data from the table 404E, and perform an enrichment to normalize the data in the table 404E, since the data in table 404E is at a daily granularity and only has the query counts, while the data in table 404F needs to have the fields from the pricing plan... It should be noted that the incremental interval may be any appropriate time period (e.g., hour, two hours, a day, etc…. For example, the monetizer 405's reading of raw data from the stream of the table 404A, extraction of the listing usage information, and storage of the result into table 404C may occur every 15 mins as data from the Job DPO 430 is imported every 15 mins, Meanwhile, the merging of the metadata received from the listing import DPO 440 into the table 404B may occur every hour as data from the listing import DPO 440 is imported once an hour)). Rationales to have modified / combined Moustafa / Dageville are above in claim 1, 10, 19 and reincorporated. Regarding Claims 6, 15, 24: Moustafa / Dageville teaches all the limitations of claims 1, 10, 19 above. 
Moustafa further teaches: wherein the first cloud resource is held in a first cloud boundary of the cloud service system available to an associated consumer processor device affiliated with the first cloud boundary, wherein the first chunk of raw usage data comprises a boundary identification parameter that identifies the first cloud boundary holding the first cloud resource (Moustafa ¶ [0085]: It should be understood that some of the devices in 410 are multi-tenant devices where Tenant 1 may function within a tenant1 ‘slice' while a Tenant 2 may function within a tenant2 slice…. A trusted multi-tenant device may further contain a tenant specific cryptographic key such that the combination of key and slice may be considered a “root of trust” (RoT) or tenant specific RoT. A RoT may further be computed dynamically composed using a DICE ([EN: consumer] Device Identity Composition Engine) architecture. Mid-¶ [0086]: Cloud computing nodes often use containers [EN: cloud boundaries], FaaS engines, servlets, servers, or other computation abstraction that may be partitioned according to a DICE layering and fan-out structure [EN: boundary identification parameter] to support a RoT context for each), and wherein selecting the first set of enrichment metadata is further based on the boundary identification parameter (Moustafa ¶ [0354]: At block 2904, the data usage monitoring circuitry 1300, based on policy 1322, may tag data in the target data stream with metadata for tracking. For example, the DRM circuitry 1318 may tag data in the target data stream with metadata tags corresponding to tracking information about the time, date, source location, ownership of the data, identification of target AI application node attempting the consumption, and / or other types of tags [EN: boundary identification parameters discussed in previous limitation]). Regarding Claims 7, 16, 25: Moustafa / Dageville teaches all the limitations of claims 1, 10, 19 above. 
Moustafa further teaches: refreshing the dataset by: receiving updated value data representative of an updated value of an item of the first set of enrichment metadata or the second set of enrichment metadata (Moustafa ¶ [0114]: In the illustrated example, the ADM system 600 includes the data publishing manager 618 to implement publish - subscribe messaging. For example, a subscriber (e.g., a data subscriber, a device subscriber, etc.) may coordinate with the scheduler 620 to subscribe to [EN: and thus receive] changes, updates, etc., of data of the metadata storage 646, the raw datastore 648, and / or one (s) of the data sources 604 [EN: first or second]); [..] storing the updated value data [..] in the dataset by adding the updated value data [..] to the dataset as a third set of enrichment metadata (Moustafa ¶ [0294]: If, at block 2010, the data usage monitoring circuitry 1300 determines to update the baseline data based on the outputs, control proceeds to block 2012. At block 2012, the data usage monitoring circuitry 1300 updates the base line data in a datastore based on the outputs. 
¶ [0410]: …wherein the instructions… instantiate a first super node in the edge environment, and deploy a second instantiation of the machine learning model to the first super node, execute the second instantiation of the machine learning model based on a first plurality of data streams within network traffic ingested at the first super node to generate one or more second outputs, the one or more second outputs including values representative of data stream characteristics and values representative of AI application node characteristics, share the one or more second outputs with a second super node in the edge environment, obtain one or more third outputs from the second super node, the one or more third outputs generated from a third instantiation of the machine learning model executed at the second super node based on a second plurality of data streams within network traffic ingested at the second super node, the one or more third outputs including values representative of data stream characteristics and values [EN: metadata] representative of AI application node characteristics, and train the machine learning model using at least one of the one or more second outputs or the one or more third outputs to build a consensus nominal data stream pattern). Although Moustafa teaches receiving and storing updated enrichment metadata as a third set of enrichment metadata, Moustafa does not specifically teach associating the update with a time window or storing updated time window data. 
However, Dageville in analogous art of enriching cloud usage data teaches or suggests: associating the updated value of the item with an updated time window defined by updated time window data (Dageville end-¶ [0035]: Other information included in the metadata 204 may be metadata for use by business intelligence tools, text description of data contained in the table, keywords associated with the table to facilitate searching, a link (e.g., URL) to documentation related to the shared data, and a refresh interval indicating how frequently the shared data is updated along with the date [EN: time window] the data was last updated. ¶ [0080]: updated on: this field provides a timestamp of the last update); storing the updated value data in association with the updated time window data in the dataset by adding [..] the updated time window data to the dataset [..] (Dageville end-¶ [0035]: Other information included in the metadata 204 may be metadata for use by business intelligence tools, text description of data contained in the table, keywords associated with the table to facilitate searching, a link (e.g., URL) to documentation related to the shared data, and a refresh interval indicating how frequently the shared data is updated along with the date [EN: time window] the data was last updated. ¶ [0080]: updated on: this field provides a timestamp of the last update). Dageville and Moustafa are found as analogous art of enriching cloud usage data. It would have been obvious to one skilled in the art, before the effective filing date of the invention, to have modified Moustafa’s data usage monitoring system, apparatus, and method to have included Dageville’s teachings around associating updates with a time window and storing updated time window data. These additional features would have reduced costs and latency for customers (Dageville ¶ [0012]). 
The predictability of such modifications and/or variations would have been corroborated by the broad level of skill of one of ordinary skill in the art as articulated by Moustafa in view of Dageville (see MPEP 2143 G). Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of enriching cloud usage data. In such a combination, each element would have merely performed the same function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given existing technical ability to combine the elements, as evidenced by Moustafa in view of Dageville above, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner. Thus, it would have been reasoned that the results of the combination would have been predictable (see MPEP 2143 A). Regarding Claims 8, 17, 26: Moustafa / Dageville teaches all the limitations of claims 7, 16, 25 above. Moustafa further teaches: performing a [..] operation on the [..] set of enrichment metadata and the [..] raw usage data to generate [..] enriched usage data, wherein the [..] stream of enriched usage data comprises [..] enriched usage data (Moustafa ¶ [0354]: At block 2904, the data usage monitoring circuitry 1300, based on policy 1322, may tag data in the target data stream with metadata for tracking. For example, the DRM circuitry 1318 may tag data in the target data stream with metadata tags [EN: updated values] corresponding to tracking information about the time, date, source location, ownership of the data, identification of target AI application node attempting the consumption, and / or other types of tags). 
Although Moustafa teaches storing a dataset comprising enrichment metadata, extracting raw usage data from a cloud resource, resource identification parameters, usage time parameters, and generating enriched metadata, Moustafa does not specifically teach multiple raw usage data and enrichment metadata datasets as “chunks” with distinct and different time windows. However, Dageville in analogous art of enriching cloud usage data teaches or suggests: performing a database join operation on the third set of enrichment metadata and a third chunk of raw usage data extracted from the stream of raw usage data to generate a third chunk of enriched usage data, wherein the first stream of enriched usage data comprises the second chunk of enriched usage data (Dageville mid-¶ [0012]: Data cleaning, de-identification, aggregation, joining, and other forms of data enrichment need to be performed by the owner of data before it is shareable with another party. Mid-¶ [0016]: In addition, participants in a private ecosystem data exchange may work together to join their datasets [EN: chunks of raw data] together to jointly create a useful data product [EN: chunk of enriched data] that any one of them alone would not be able to produce. Once these joined datasets are created, they may be listed on the data exchange or on the data marketplace. End-¶ [0033]: In addition, database operations (joining, aggregating, analysis, etc.) ascribed to a user (consumer or provider) shall be understood to include performing of such actions by the cloud computing service 112 in response to an instruction from that user. Mid-¶ [0085]: …the monetizer 405 may schedule a price plan extraction task to run at regular intervals (e.g., every hour or more frequently) which will retrieve all the latest versions [EN: multiple sets, first, second, third, etc.] of the metadata for each monetized listing from the table 404B.) 
Rationales to have modified / combined Moustafa / Dageville are above in claims 3, 12, 21 and reincorporated. Regarding Claims 9, 18, 27: Moustafa / Dageville teaches all the limitations of claims 1, 10, 19 above. Moustafa further teaches: wherein the dataset comprises an entity map dataset (Moustafa mid-¶ [0056]: In some examples, the ADM system may utilize Artificial Intelligence / Machine Learning (AI / ML) modeling techniques and / or data graph techniques to map, associate, and / or otherwise correlate relevant datasets to one another. For example, in order to determine whether consumption / usage of the data in the data stream is ethical, comparisons of the target AI application node's consumption / usage (or attempts thereof) of the data stream to (known ethical) baseline / nominal consumption / usages from other Al application nodes may be implemented through the AI / ML modeling techniques and / or data graph techniques). Regarding Claims 28, 29, 30: Moustafa / Dageville teaches all the limitations of claims 1, 10, 19 above. Although Moustafa teaches storing a dataset comprising enrichment metadata and generating enriched metadata, Moustafa does not specifically teach selecting enrichment data by distinguishing between time windows. However, Dageville in analogous art of enriching cloud usage data teaches or suggests: wherein the first set of enrichment data is further selected based on determining that the first usage time parameter does not correspond to the second time window (Dageville mid-¶ [0085]: …the monetizer 405 may schedule a price plan extraction task to run at regular intervals (e.g., every hour or more frequently) which will retrieve all the latest versions of the metadata for each monetized listing from the table 404B. This task may run at any appropriate interval (e.g., on an hourly basis). The monetizer 405 may then determine for each monetized listing, whether there is a price entry for the current month in table 404D. 
Mid-¶ [0086]: For each identified monetized listing (identified based on import IDs), the monetizer 405 may look up the corresponding entry in the listing import DPO 440, and obtain the pricing plan for that monetized listing. If the monetizer 405 determines that the job ID corresponds to the first query in the current billing interval [EN: distinguishing between time windows], then the monetizer 405 may add a fixed price charge and then add a per-query charge for each subsequent use of the monetized listing). Dageville and Moustafa are found as analogous art of enriching cloud usage data. It would have been obvious to one skilled in the art, before the effective filing date of the invention, to have modified Moustafa’s data usage monitoring system, apparatus, and method to have included Dageville’s teachings around distinguishing between time windows when generating enriched metadata. These additional features would have reduced costs and latency for customers (Dageville ¶ [0012]). The predictability of such modifications and/or variations would have been corroborated by the broad level of skill of one of ordinary skill in the art as articulated by Moustafa in view of Dageville (see MPEP 2143 G). Further, the claimed invention could have also been viewed as a mere combination of old elements in a similar field of enriching cloud usage data. In such a combination, each element would have merely performed the same function as it did separately. Thus, one of ordinary skill in the art would have recognized that, given existing technical ability to combine the elements, as evidenced by Moustafa in view of Dageville above, the to-be-combined elements would have fit together like pieces of a puzzle in a logical, complementary, technologically feasible and/or economically desirable manner. Thus, it would have been reasoned that the results of the combination would have been predictable (see MPEP 2143 A). 
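The chunk-wise enrichment flow the rejections map onto Dageville (select the enrichment metadata set whose time window covers a record's usage time parameter and whose resource set contains the resource, then join it with the corresponding chunk of raw usage data) can be sketched in a few lines. This is an illustrative toy only; every name (`EnrichmentSet`, `enrich_chunk`, the sample resources and windows) is hypothetical and drawn from neither the application nor the cited references.

```python
# Hypothetical sketch of chunk-wise enrichment over distinct time windows.
# Not the claimed implementation; names and data are invented for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class EnrichmentSet:
    window_start: int  # inclusive start of this set's time window
    window_end: int    # exclusive end of this set's time window
    metadata: dict     # resource identifier -> enrichment attributes


def select_enrichment(sets, resource_id, usage_time):
    """Pick the set whose resource list contains the resource and whose
    time window covers the usage time parameter (and therefore does not
    correspond to any other set's window)."""
    for s in sets:
        if resource_id in s.metadata and s.window_start <= usage_time < s.window_end:
            return s
    return None


def enrich_chunk(chunk, sets):
    """Join each raw-usage record in a chunk with its matching enrichment
    metadata, yielding one chunk of enriched usage data."""
    enriched = []
    for record in chunk:  # record: {"resource_id": ..., "usage_time": ...}
        s = select_enrichment(sets, record["resource_id"], record["usage_time"])
        if s is not None:
            enriched.append({**record, **s.metadata[record["resource_id"]]})
    return enriched


# Two enrichment metadata sets with distinct, different time windows.
sets = [
    EnrichmentSet(0, 100, {"vm-1": {"plan": "hourly"}}),
    EnrichmentSet(100, 200, {"vm-1": {"plan": "per-query"}}),
]
chunk = [{"resource_id": "vm-1", "usage_time": 42}]
print(enrich_chunk(chunk, sets))
# -> [{'resource_id': 'vm-1', 'usage_time': 42, 'plan': 'hourly'}]
```

A record with `usage_time` 150 would instead pick up the second set's metadata, which is the distinction claims 28-30 turn on: selection requires that the usage time parameter match one window and not the other.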
Conclusion

The following art is made of record and considered pertinent to Applicant’s disclosure:
Chasman; Mike et al., US 20200014659 A1, System and method for midserver facilitation of long-haul transport of telemetry for cloud-based services.
Pinheiro; Ralph Joseph et al., US 20220103618 A1, Data pipeline architecture.
Yanamandra; Sangeetha et al., US 20200117757 A1, Real-time monitoring and reporting systems and methods for information access platform.
Kothari; Ankit et al., US 20220091947 A1, Systems and methods for performing a technical recovery in a cloud environment.
Suttle; Scott et al., US 20230342179 A1, Compliance across multiple cloud environments.
Glickman; Matthew J. et al., US 20230385286 A1, Overlap results data generation on a cloud data platform.
Ji; Chaoping et al., US 20210279109 A1, Method and apparatus for acquiring information.
Dageville; Benoit et al., US 20230316348 A1, Usage monitoring and usage based data pricing.
Jiang; Xiaoxiao et al., US 20180219784 A1, Traffic control platform.
Cooley; Shaun et al., US 20220150124 A1, Automatic discovery of automated digital systems through link salience.
Seetharaman; Ganesh et al., US 20180052861 A1, System and method for metadata-driven external interface generation of application programming interfaces.
Podder; Soumyajit, US 20240320240 A1, System for consolidating data attributes for optimizing data processing and a method thereof.
Ferris; James Michael et al., US 20130304925 A1, Cloud deployment analysis featuring relative cloud resource importance.
Kisser; Lauren M. et al., US 11720536 B1, Data enrichment as a service.
Cannaliato; Thomas James et al., US 9092502 B1, System and method for correlating cloud-based big data in real-time for intelligent analytics and multiple end uses.
Abdul Rasheed et al., WO 2021174104 A1, Modification of data in a time-series data lake.
S. Singh and Y. Liu, "A cloud service architecture for analyzing big monitoring data," Tsinghua Science and Technology, vol. 21, no. 1, pp. 55-70, Feb. 2016, doi: 10.1109/TST.2016.7399283.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to REED M. BOND whose telephone number is (571) 270-0585. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/REED M. BOND/
Examiner, Art Unit 3624
August 11, 2025

/HAMZEH OBAID/
Primary Examiner, Art Unit 3624
February 10, 2026

Prosecution Timeline

Apr 18, 2024
Application Filed
Aug 11, 2025
Non-Final Rejection — §101, §103
Nov 17, 2025
Examiner Interview Summary
Nov 17, 2025
Applicant Interview (Telephonic)
Nov 19, 2025
Response Filed
Feb 10, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586012
PROVIDING UNINTERRUPTED REMOTE CONTROL OF A PRODUCTION DEVICE VIA VIRTUAL REALITY DEVICES
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
6%
Grant Probability
39%
With Interview (+33.3%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month