Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. Claims 1-4, 6-11, 13-18, and 20-21 are pending.
3. Claims 1, 8, and 15 are independent.
4. In this Amendment, Applicant has amended claims 1-4, 6, 8, 9, 11, 13, 15, 16, 18, and 20; added new claim 21; and cancelled claims 5, 12, and 19.
5. This Office action is in response to the remarks (REM) filed 02/16/2026.
6. This Office action is made FINAL.
Claim Rejections - 35 USC § 112
7. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
8. Claims 1-4, 6-11, 13-18 and 20-21 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
9. Regarding claims 1, 8, and 15, the phrase “such that” introduces a functional requirement that is not supported by sufficient structure and creates an ambiguous functional result, which renders the claims vague, unclear, and indefinite.
Dependent claims are rejected under 35 U.S.C. 112(b) due to their dependence on the rejected independent claims, from which they inherit the same deficiencies.
Examiner Note
10. The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Claim Rejections - 35 USC § 103
11. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
12. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
13. Claims 1-4, 6-11, 13-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Esman (US 11450419 B1), hereinafter Esman, in view of Lloyd et al. (US 20200005045 A1), hereinafter Lloyd.
14. Regarding claim 1, Esman teaches A system for generating machine learning feature vectors, the system comprising:
an event ingestion module (Fig 1 & 2, “data intake and query system”, col 6, lines 38-40, “using an event-based data intake and query system, such as the SPLUNK® ENTERPRISE system developed by Splunk Inc… providing real-time operational intelligence that enables organizations to collect, index, and search machine data from various websites, applications, servers, networks, and mobile devices that power their businesses.”, col 135, lines 46-63, “analyzing and searching large amounts of machine data presents a number of challenges that can be addressed using an event-based data intake and query system, such as the SPLUNK® ENTERPRISE system. In addition to facilitating the collection and indexing of any type of machine data”) that is configured to:
ingest data indicative of events associated with a plurality of entities from one or more sources (col 6, lines 53-67, “In general, each event has a portion of machine data that is associated with a timestamp that is derived from the portion of machine data in the event.”, col 7, lines 30-37, “The data intake and query system uses a flexible schema to specify how to extract information from events…a flexible schema may be applied to events “on the fly,” when it is needed (e.g., at search time, index time, ingestion time, etc.).”, col 7, lines 38-45, “The system parses the machine data to produce events each having a portion of machine data associated with a timestamp.”, col 8, lines 42-62, “the data intake and query system … a field extractor [to learn] more about the data in the events”), at least a portion of the data indicative of events being inherently partitioned based on the associated plurality of entities (col 59, lines 31-67 & col 60, lines 1-44, “inheriting a dataset and/or rule”, “one or more datasets and/or rules can be inherited automatically”, “a dataset 608 in a dataset association record 602 can be imported or inherited from another dataset association record 602.”); and
assign arrival timestamps to the data indicative of events using a distributed timestamp assignment (col 6, lines 53-67, “In general, each event has a portion of machine data that is associated with a timestamp that is derived from the portion of machine data in the event. A timestamp of an event may be determined through interpolation between temporally proximate events having known timestamps or may be determined based on other configurable rules for associating timestamps with events.”, col 7, lines 5-19, “For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing machine data that includes different types of performance and diagnostic information associated with a specific point in time (e.g., a timestamp).”, col 7, lines 38-45, “The system parses the machine data to produce events each having a portion of machine data associated with a timestamp.”, col 15, lines 48-60, “As part of processing the data, the indexing system can identify timestamps associated with the data, organize the data into buckets or time series buckets”),
Esman teaches wherein the arrival timestamps comprise a time component (col 6, lines 53-67, “An event comprises a portion of machine data and is associated with a specific point in time. The portion of machine data may reflect activity in an IT environment and may be produced by a component of that IT environment. Events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time.”), a unique machine identification (ID) (col 7, lines 5-19, “For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing machine data that includes different types of performance and diagnostic information associated with a specific point in time (e.g., a timestamp).”), and a sequence number (col 6, lines 53-67, “An event comprises a portion of machine data and is associated with a specific point in time. Events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time.”, “the information can include location information regarding the data that was stored to the common storage 216, bucket identifiers of the buckets that were copied to common storage 216, as well as additional information, e.g., in implementations in which the ingestion buffer 310 uses sequences of records as the form for data storage, the list of record sequence numbers that were used as part of those buckets that were copied to common storage 216.”);
at least one database configured to store the data indicative of events (Fig 2, “data stores”, col 8, lines 1-13, “pre-specified data items may be extracted from the machine data and stored in a database to facilitate efficient retrieval and analysis of those data items at search time.”); and
at least one computing node in communication with the at least one database, wherein the at least one computing node is configured at least to (Fig 2, query system 214 with a gateway 215 implemented using an application programming interface (API), see also Fig 5, query system 214 with search nodes 506):
receive at a first time and by way of an application programming interface (API), first information indicative of a first user query (col 17, lines 11-24, “the query system 214 makes requests to and receives data from the data store catalog 220 using an application programming interface (“API”).”, col 19, lines 15-55, “one or more components of the data intake and query system 108 can include their own API. In such embodiments, the gateway 215 can communicate with the API of a component of the data intake and query system 108. Accordingly, the gateway 215 can translate requests received from an external device into a command understood by the API of the specific component of the data intake and query system 108.”, col 6, lines 38-67, “using an event-based data intake and query system (a first indication of the first user query), such as the SPLUNK® ENTERPRISE system “, Fig 15, step 1502, col 92, lines 64-67 & col 93, lines 1-23, “the search head 504 can determine whether the query was submitted by an authenticated user and/or review the query to determine that it is in a proper format for the data intake and query system 108, has correct semantics and syntax, etc. (a first indication of the first user query).”);
generate, based at least on the data indicative of events and the arrival timestamps assigned to the data indicative of events, retrieved from the at least one database based on the first information indicative of the first user query, results associated with the first user query, wherein the results comprise one or more feature vectors for use with a machine learning algorithm (col 119, lines 16-29, “token entries 2911 illustrated in inverted index 2907B, can include a token 2911A (e.g., “error,” “itemID,” etc.) and event references 2911B indicative of events that include the token.”, col 127, lines 47-38, “At block 3010, the search head 504 combines the partial results and/or events received from the search nodes 506 to produce a final result for the query. In some examples, the results of the query are indicative of performance or security of the IT environment and may help improve the performance of components in the IT environment.”, col 139, lines 25-45 “the number and type of features included in the users' features vectors can be configured using first feature filters 3303 and second feature filters 3304 and the feature rendering filter 3305 of the dataset filters 3302.”, col 139, lines 46-67, “the plurality of feature vectors are generated based on execution of a filter query against timestamped event data stored by the data intake and query system. the medication security analytics application uses a machine learning or other type of algorithm to generate the visualization of user behavior information 3307 by reducing the number of dimensions of the clustered plurality of feature vectors to three dimensions.”); and
cause storage of data indicative of the results in a feature store for use by the machine learning algorithm in training a model (col 139, lines 46-67, “the plurality of feature vectors are generated based on execution of a filter query against timestamped event data stored by the data intake and query system. the medication security analytics application uses a machine learning or other type of algorithm to generate the visualization of user behavior information 3307 by reducing the number of dimensions of the clustered plurality of feature vectors to three dimensions.”, Fig 32, col 138, lines 25-42, “In the example of FIG. 32, the detailed dataset results list 3216 displays the 250 most recent interactions with the medication dispensing system involving opioids within the selected timeframe. In one embodiment, the detailed dataset results list 3216 includes the interaction time, user identifier information, an in/out indication, in/out time information, witness user identifier, patient identifier, transaction type, medication/prescription order identifier, medication name, medication control level, an opioid indicator, medication location, user department information, user title, etc.”);
Per definition: “In database systems, ‘materializing a query’ refers to the process of storing the results of a query as a physical table in the database.”
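For illustration of the quoted definition only (the table and column names below are hypothetical and not drawn from either cited reference), materializing a query can be sketched as persisting query results under a name that subsequent queries can read directly:

```python
import sqlite3

# Illustrative sketch: the results of a query are persisted as a named
# physical table so that subsequent queries read the stored results
# instead of re-running the original query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (entity TEXT, value REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 1.0), ("a", 2.0), ("b", 3.0)])

# Materialize the aggregate query under the name "feature_sums".
conn.execute("""
    CREATE TABLE feature_sums AS
    SELECT entity, SUM(value) AS total FROM events GROUP BY entity
""")

# The named dataset is now accessible for subsequent queries.
rows = conn.execute(
    "SELECT entity, total FROM feature_sums ORDER BY entity").fetchall()
print(rows)  # [('a', 3.0), ('b', 3.0)]
```

Re-issuing the `CREATE TABLE ... AS` statement after dropping the old table would correspond to overwriting the previously materialized dataset with updated results.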
Esman implicitly teaches wherein the API is further configured to receive, from a user including a first user, a specific request to materialize results of the first user query to an external feature store, wherein materializing comprises persisting the results of the first user query as a named dataset in the external feature store, such that the named dataset is accessible for subsequent queries (col 8, lines 42-62, “the data intake and query system maintains the underlying machine data and uses a late-binding schema for searching the machine data, it enables a user to continue investigating and learn valuable insights about the machine data.”, col 15, lines 48-60, “the indexing system 212 can update the data store catalog 220 with information related to the buckets (pre-merged or merged) or data that is stored in common storage 216, and can communicate with the intake system 210 about the status of the data storage”, col 46, lines 51-46, “Query results in the query acceleration data store 222 can be updated as additional query results are obtained.”, col 25, lines 4-10, “The intake system 210 is illustratively configured to ensure message resiliency, such that data is persisted in the event of failures within the intake system 310”, col 34, lines 18-29, “the intake system 210 can retain or persistently make available the sent data until the intake system 210 receives an acknowledgement from the indexing system 212 that the sent data has been processed, stored in persistent storage (e.g., common storage 216), or is safe to be deleted”, col 42, lines 30-42, “[t]he data store catalog 220 (a named dataset)”, col 56, lines 23-44, “Query Acceleration Data Store”, col 94, lines 27-34, “the search manager 514 can identify search nodes 506 using a search node mapping policy, previous mappings, previous searches, or the contents of a data store associated with the search nodes 506.”, col 119, lines 5-42, col 147, lines 8-31, “data that is indexed may be stored in buckets, which may be stored in a persistent storage once certain bucket requirements have been met, and retrieved as needed for searching.”, “Token entries”, in line with Applicant's instant pre-pub paragraphs [0128-0129] and [0153-0154]), and
wherein, in response to a subsequent explicit materialization request for the same first user query received via the API from the user including the first user, the at least one computing node is further configured to: overwrite the previously materialized named dataset for the first user query in the external feature store with updated results (col 34, lines 30-38, “As the indexing system 212 stores the data in common storage 216…By moving the marker, the intake system 210 can indicate that the previously-identified data has been stored in common storage 216, can be deleted from the intake system 210 or, otherwise, can be allowed to be overwritten, lost, etc.”, see also col 38, lines 19-25).
However, Lloyd also explicitly teaches assign arrival timestamps to the data indicative of events using a distributed timestamp assignment (Abstract, [0037], [0049]), wherein the arrival timestamps comprise a time component, a unique machine identification (ID), and a sequence number ([0014], [0029], “The user interface either allows the user to provide a name for the new model specification or automatically generates a name of the new model specification, such as appending the current date/time and/or a version number to the name of the existing model specification.”); and
wherein the API is further configured to receive, from a user including a first user, a specific request to materialize results of the first user query to an external feature store, wherein materializing comprises persisting the results of the first user query as a named dataset in the external feature store, such that the named dataset is accessible for subsequent queries ([0014], [0017], “snippet from feature registry 140”, [0018], “Feature registry 140 may be stored in persistent storage 130. Feature engine 150 may access a copy of feature registry 140 in volatile memory. Alternatively, feature registry 140 may access feature engine 150 to compute derived features on demand.”, [0031], “a request from a client device (a specific request)”, [0032-0033], “feature engine 150 reads the corresponding feature values from a particular location in persistent storage 130. The particular location in persistent storage 130 may be determined based on a name of the corresponding feature and/or based on location information that is part of an entry, in feature registry 140, that corresponds to the corresponding feature.”, [0037], “In training mode, the feature vectors (a named dataset) may be stored in persistent storage and then consumed by the training module.”, [0039-0040], “feature engine 150 includes an API that may be called that takes, as parameters, a name of a model specification and a mode indicator that indicates which mode of feature engine 150 is being invoked. 
In a call invoking feature engine 150, if feature engine 150 receives a name of an existing model and the call indicates the training mode, then feature engine 150 may respond by requesting a user that initiated the call to confirm whether the user desires a new model be created”, [0056], “an example of a schema for a set of feature values stored in persistent storage 130”) ([0061] As described above, feature transformations may be applied to existing features while generating feature vectors (a named dataset) for subsequent training or scoring.); and
wherein, in response to a subsequent explicit materialization request for the same first user query received via the API from the user including the first user, the at least one computing node is further configured to: overwrite the previously materialized named dataset for the first user query in the external feature store with updated data results ([0014], “Some feature generation jobs 110-114 may run regularly, such as hourly, daily, weekly, or monthly and analyze data that is associated with a timestamp in a certain time window, such as the last hour, the last day, the last week, the last 30 days, etc.”, [0039-0040], “feature engine 150 includes an API that may be called that takes, as parameters, a name of a model specification and a mode indicator that indicates which mode of feature engine 150 is being invoked. In a call invoking feature engine 150, if feature engine 150 receives a name of an existing model and the call indicates the training mode, then feature engine 150 may respond by requesting a user that initiated the call to confirm whether the user desires a new model be created. in which case a new name can be recommended to the user, or whether the user desires the model to score a set of users, a set of items, or a set of user-item pairings.”, see also [0062], “Feature Transformations” and [0067], “abstract model, there is one abstract list that is overwritten in order to define a new model.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into Esman the concept, suggested in Lloyd's system, of regenerating the behavior vector to include the updated behavior feature and applying the regenerated behavior vector to the classifier model. One would have been motivated to combine Lloyd with Esman because both systems are related to feature engineering, and the combination would provide a new feature generation framework that reduces errors in models and increases model iteration velocity (Lloyd, [0001]).
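As an illustration of the recited arrival-timestamp format only, and not as a characterization of Esman or Lloyd (the bit widths and values below are assumed), a distributed timestamp comprising a time component, a unique machine ID, and a sequence number is commonly packed into a single 64-bit value:

```python
# Hypothetical sketch of a distributed arrival timestamp: a time component,
# a unique machine ID, and a per-machine sequence number packed into one
# 64-bit integer (bit widths are illustrative assumptions).
TIME_BITS, MACHINE_BITS, SEQ_BITS = 41, 10, 12

def make_arrival_timestamp(millis: int, machine_id: int, seq: int) -> int:
    # Each field must fit in its allotted bits.
    assert machine_id < (1 << MACHINE_BITS) and seq < (1 << SEQ_BITS)
    return (millis << (MACHINE_BITS + SEQ_BITS)) | (machine_id << SEQ_BITS) | seq

def unpack(ts: int):
    seq = ts & ((1 << SEQ_BITS) - 1)
    machine_id = (ts >> SEQ_BITS) & ((1 << MACHINE_BITS) - 1)
    millis = ts >> (MACHINE_BITS + SEQ_BITS)
    return millis, machine_id, seq

ts = make_arrival_timestamp(1_700_000_000_000, machine_id=7, seq=42)
print(unpack(ts))  # (1700000000000, 7, 42)
```

Because the time component occupies the high-order bits, such identifiers sort by arrival time first, then by machine ID and sequence number, which is what allows each machine to assign timestamps independently.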
15. Regarding claim 2, Esman and Lloyd teach the invention as claimed in claim 1 above and further Esman teaches wherein the at least one computing node is further configured to: determine, based on runtime information and during the generation of the results associated with the first user query, an error associated with the first user query; and cause sending of an indication of the error to at least one user device associated with the first user query (col 26, lines 52-67, “The notable event topic 344 may be intended to store messages holding data that indicates a notable event at a data source 202 (e.g., the occurrence of an error or other notable event) … The mobile alerts topic 350 may be intended to store messages holding data for which an end user has requested alerts on a mobile device. A variety of custom topics 352A through 352N may be intended to hold data relevant to end-user-created topics.”, col 42, lines 6-21, “if the data store catalog 220 includes keyword pairs, it can use the keyword: Error to identify buckets that have at least one event that include the keyword Error.”, col 59, lines 22-30, “if a query identifies a dataset association record 602 for use but references datasets or rules of another dataset association record 602, the data intake and query system 108 can indicate an error.”, col 66, lines 16-41, “a first rule may be provided by a first system to transform a message according to the knowledge of that system (e.g., transforming an error code into an error descriptor), while a second rule may process the message according to the transformation (e.g., by detecting that the error descriptor satisfies alert criteria).”, col 116, lines 14-39, “Event 2934 is associated with an entry in a server error log, as indicated by “error.log” in the source column 2937 that records errors that the server encountered when processing a client request.
Similar to the events related to the server access log, all the raw machine data in the error log file pertaining to event 2934 can be preserved and stored as part of the event 2934.”).
16. Regarding claim 3, Esman and Lloyd teach the invention as claimed in claim 1 above and further Esman teaches wherein the at least one computing node is further configured to: receive at least one access-control list (ACL), wherein the at least one ACL indicates at least one of: users that have access to specific data fields within the system; and at least one requirement that data fields within the system be operated on in specific ways (col 57, lines 5-15, “since the query acceleration data store 222 can be utilized to service requests from different client devices 204, the query acceleration data store 222 can implement access controls (e.g., an access control list) with respect to the stored datasets”).
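For illustration of the ACL limitation only (the roles and field names below are hypothetical), an access-control list indicating which users have access to specific data fields can be sketched as:

```python
# Hypothetical sketch of an ACL mapping user roles to the data fields
# they may read; roles and field names are illustrative assumptions.
acl = {
    "analyst": {"event_time", "event_type"},
    "admin": {"event_time", "event_type", "patient_id"},
}

def permitted_fields(user_role: str, requested: list) -> list:
    """Filter a requested field list down to those the role may access."""
    allowed = acl.get(user_role, set())
    return [f for f in requested if f in allowed]

print(permitted_fields("analyst", ["event_time", "patient_id"]))  # ['event_time']
```

The second prong of the limitation (fields that must be operated on in specific ways, e.g., masked or aggregated) could be modeled by mapping each field to a required transformation rather than a simple allow/deny set.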
17. Regarding claim 4, Esman and Lloyd teach the invention as claimed in claim 1 above and further Esman teaches wherein the at least one computing node is further configured to: generate a token associated with the first information indicative of the first user query and the results associated with the first user query; receive, at a second time and by way of the API, second information indicative of a second user query, wherein the second time occurs after the first time; generate, based at least on the data indicative of events, the arrival timestamps assigned to the data indicative of events, the token, the second information indicative of the second user query, the results associated with the first user query, and the first information indicative of the first user query, additional results associated with the second user query, wherein the additional results comprise one or more additional feature vectors for use with the machine learning algorithm; and cause storage of data indicative of the additional results in the feature store for use by the machine learning algorithm in training a model (col 54, lines 47-67, “requests for buckets may include a tenant identifier and some form of user authentication, e.g., a user access token that can be authenticated by authentication service.”, col 119, lines 5-42, col 120, lines 51-67, col 121, lines 13-27, col 124, lines 63-67 & col 125, lines 1-5).
18. Regarding claim 6, Esman and Lloyd teach the invention as claimed in claim 1 above and further Esman teaches wherein the first user query is associated with a token, the token indicating a state of the system at which the at least one computing node is to generate the results associated with the first user query (col 54, lines 47-67, “requests for buckets may include a tenant identifier and some form of user authentication, e.g., a user access token that can be authenticated by authentication service.”, col 119, lines 5-42, col 120, lines 51-67, col 121, lines 13-27, col 124, lines 63-67 & col 125, lines 1-5).
19. Regarding claim 7, Esman and Lloyd teach the invention as claimed in claim 1 above and further Esman teaches wherein the API employs a plurality of client libraries, each of the plurality of client libraries providing interfaces that interact with one or more predefined data science tools using methods associated with the API (col 17, lines 11-24, “the query system 214 makes requests to and receives data from the data store catalog 220 using an application programming interface (“API”).”, col 19, lines 15-55, “one or more components of the data intake and query system 108 can include their own API. In such embodiments, the gateway 215 can communicate with the API of a component of the data intake and query system 108. Accordingly, the gateway 215 can translate requests received from an external device into a command understood by the API of the specific component of the data intake and query system 108.”).
20. Regarding claim 21, Esman and Lloyd teach the invention as claimed in claim 4 above and further Lloyd teaches wherein:
the first information indicative of the first user query comprises a first configuration (Abstract, [0011], [0015]);
the second information indicative of the second user query comprises a second configuration (Abstract, [0011], [0015]);
the first configuration comprises a first entity configuration, a first point-in-time configuration, and a first sample configuration (Abstract, [0011], [0017], [0019], features are computed/configured for the users);
the second configuration comprises a second entity configuration, a second point-in-time configuration, and a second sample configuration (Abstract, [0011], [0017], [0019], features are computed/configured for the users);
the at least one computing node persisting the results of the first user query as the named dataset further comprises assigning a persisted timestamp to the named dataset (Abstract, [0037], [0049], “generating the feature vectors, which comprise values for each of the features in the feature vector”);
the external feature store comprises a plurality of named datasets; and the at least one computing node is further configured to: deploy the first configuration and assign a first deployment time to the first configuration in response to deployment of the first configuration; generate the results associated with the first user query further based on the first configuration ([0014], [0017], “snippet from feature registry 140”, [0018], “Feature registry 140 may be stored in persistent storage 130. Feature engine 150 may access a copy of feature registry 140 in volatile memory. Alternatively, feature registry 140 may access feature engine 150 to compute derived features on demand.”, [0031], “a request from a client device (a specific request)”, [0032-0033], “feature engine 150 reads the corresponding feature values from a particular location in persistent storage 130. The particular location in persistent storage 130 may be determined based on a name of the corresponding feature and/or based on location information that is part of an entry, in feature registry 140, that corresponds to the corresponding feature.”, [0037], “In training mode, the feature vectors (a named dataset) may be stored in persistent storage and then consumed by the training module.”, [0039-0040], “feature engine 150 includes an API that may be called that takes, as parameters, a name of a model specification and a mode indicator that indicates which mode of feature engine 150 is being invoked.
In a call invoking feature engine 150, if feature engine 150 receives a name of an existing model and the call indicates the training mode, then feature engine 150 may respond by requesting a user that initiated the call to confirm whether the user desires a new model be created”, [0056], “an example of a schema for a set of feature values stored in persistent storage 130”) ([0061] As described above, feature transformations may be applied to existing features while generating feature vectors (a named dataset) for subsequent training or scoring.);
determine a difference between the first configuration and the second configuration to define the second configuration as a configuration update ([0039-0040]);
deploy the configuration update and assign a second deployment time to the configuration update in response to deployment of the configuration update ([0039-0040]);
compare the persisted timestamps of the plurality of named datasets with the second deployment time of the configuration update to identify outdated named datasets and to identify portions of the data indicative of events associated with the outdated named datasets identified (Fig 1, [0039-0040]);
generate updated results based on the portions of the data indicative of events associated with the outdated named datasets identified and the configuration update; and persist the updated results as updated named datasets to replace the outdated named datasets identified from the plurality of named datasets in the external feature store ([0014], “Some feature generation jobs 110-114 may run regularly, such as hourly, daily, weekly, or monthly and analyze data that is associated with a timestamp in a certain time window, such as the last hour, the last day, the last week, the last 30 days, etc.”, [0039-0040], “feature engine 150 includes an API that may be called that takes, as parameters, a name of a model specification and a mode indicator that indicates which mode of feature engine 150 is being invoked. In a call invoking feature engine 150, if feature engine 150 receives a name of an existing model and the call indicates the training mode, then feature engine 150 may respond by requesting a user that initiated the call to confirm whether the user desires a new model be created. in which case a new name can be recommended to the user, or whether the user desires the model to score a set of users, a set of items, or a set of user-item pairings.”, see also [0062], “Feature Transformations” and [0067], “abstract model, there is one abstract list that is overwritten in order to define a new model.”).
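For illustration of the comparison step recited in claim 21 only (the dataset names and times below are assumed), comparing the persisted timestamps of named datasets against the deployment time of a configuration update to identify outdated datasets can be sketched as:

```python
# Hypothetical sketch: each named dataset carries the timestamp at which it
# was persisted; any dataset persisted before the configuration update's
# deployment time is identified as outdated and due for regeneration.
named_datasets = {
    "user_features_v1": {"persisted_at": 100},
    "item_features_v1": {"persisted_at": 250},
}
second_deployment_time = 200  # deployment time of the configuration update

outdated = [name for name, meta in named_datasets.items()
            if meta["persisted_at"] < second_deployment_time]
print(outdated)  # ['user_features_v1']
```

The regeneration step would then recompute results for only the event data associated with the outdated datasets and persist them in place of the stale entries.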
21. Regarding claims 8-11, 13, and 14, those claims recite a method performing the functions of system claims 1-4, 6, and 7, respectively, and are rejected under the same rationale.
22. Regarding claims 15-18 and 20, those claims recite a non-transitory computer-readable medium storing instructions that, when executed, cause performance of the operations of claims 1-4, 6, and 7, respectively, and are rejected under the same rationale.
Response to Amendments and Arguments
23. In the remarks received 02/13/2026, Applicant has amended claims 1-4, 6, 8, 9, 11, 13, 15, 16, 18, and 20, newly added claim 21, and cancelled claims 5, 12, and 19 to facilitate expeditious prosecution of the application, and has argued that Esman in view of Christodorescu does not teach the invention recited in the claims for a number of reasons, including, but not limited to, the following:
at least in light of the subject matter recited by independent claim 8 as amended, Applicant submits that the claims recite subject matter that is neither disclosed nor suggested by Esman, Christodorescu, or any of the other cited references. More specifically, Applicant submits that the cited references at least fail to disclose or suggest "assigning arrival timestamps to the data indicative of events by using a distributed timestamp assignment, wherein the arrival timestamps comprise a time component, a unique machine identification (ID), and a sequence number ..." as recited in claim 8 as amended. (Emphasis added.) In contrast, the Esman reference at most discloses "[a] timestamp of an event may be determined through interpolation between temporally proximate events having known timestamps or may be determined based on other configurable rules for associating timestamps with events" (Esman, Col. 6, ll. 65-67 & Col. 7, ll. 1-4), "[t]he system parses the machine data to produce events each having a portion of machine data associated with a timestamp" (Esman, Col. 7, ll. 42-44), "[u]sing the received data, the indexing system can generate events that include a portion of machine data associated with a timestamp and store the events in buckets based on one or more of the timestamps, tenants, indexes, etc., associated with the data" (Esman, Col. 29, ll. 9-15), and "[a]t block 2908, the indexing system 212 determines a timestamp for each event ... [the] indexing system 212 may again refer to a source type definition associated with the data to locate one or more properties that indicate instructions for determining a timestamp for each event.
The properties may, for example, instruct the indexing system 212 to extract a time value from a portion of data for the event, to interpolate time values based on timestamps associated with temporally proximate events, to create a timestamp based on a time the portion of machine data was received or generated, to use the timestamp of a previous event, or use any other rules for determining timestamps" (Esman, Col. 114, ll. 65-67 & Col. 115, ll. 1-10).
Accordingly, Applicant submits that the cited references fail to disclose or suggest the subject matter recited in independent claim 8 as amended and that the rejection is overcome. Therefore, Applicant respectfully requests that the Examiner withdraw this rejection of the claims.
24. Applicant’s 35 U.S.C. § 103 arguments with respect to claims 1-4, 6-11, 13-18, and 20-21 have been fully considered but are moot in view of the new ground of rejection under 35 U.S.C. § 103 necessitated by Applicant’s amendment, as presented above.
CONCLUSION
The Applicant’s amendment necessitated a new ground of rejection. Therefore, THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HICHAM SKHOUN whose telephone number is (571) 272-9466. The examiner can normally be reached Mon-Fri, 10:00 AM - 6:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HICHAM SKHOUN/Primary Examiner, Art Unit 2164