DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Regarding Claim 10, the limitation “second target data feedback” lacks sufficient antecedent basis, as no “first target data feedback” is previously recited in the claim. To establish proper antecedent basis, the Examiner suggests, for example, amending the claim to recite “a first target data feedback” prior to the recitation of the “second target data feedback.” There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-6, 9, and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20170093915 A1; Ellis; Keith A. et al. (hereinafter Ellis) in view of US 20180095855 A1; SANAKKAYALA; Santhosh et al. (hereinafter San).
Regarding claim 1, Ellis teaches A data processing method, applied to a server, comprising: receiving a tag operation instruction for a data file (Ellis [0014] In operation, the example local policy resolution manager 104 determines whether a policy update trigger has occurred and, if so, the example smart building gateway 102 manages the policy trigger. For example, in the event a user desires to change the manner in which personal information from one or more sensors or actuators is handled, then the example smart building gateway 102 provides a user interface to allow one or more tags associated with the one or more sensors/actuators to be configured, [0016] As described above, each device is associated with tag parameters to define how the device is to operate and access privileges afforded to different entities requesting information or control attempts of the device. The example tag manager 106 generates and manages a household tag table 200, as shown in FIG. 2...[0028] a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. 
In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table [51-56] elaborates on receiving a tag operation instruction for a data file [FIG.1] shows overall system which receives tag operation instruction for a data file) and storing tag operation information associated with the tag operation instruction in an operation log; (Ellis [0011] a local storage 126 that contains, in part, a metadata database 128, which may store household tag information (e.g., spatial context information related to sensor(s), manufacture information related to sensor(s), tag values, etc.), and an end node database 130, which may store sensor data (e.g., sensor information, actuator acknowledgement/state information, etc.). The example system 100 also includes a local broker interface 132 communicatively connected to a cloud server 134 (sometimes referred to herein as a remote cloud server) via a network 136, such as the Internet. 
[0028] one or more tag parameters 202 are stored in the example metadata database 128 of the local storage 126, and the example local policy resolution manager 104 determines whether such changes require propagation of modified tag parameters 202 to the example cloud server [36-38] further elaborates on storing tag operation information [FIG.1] shows storing tag operation information associated with the tag operation instruction in an operation log) updating a data operation table associated with the data file according to the tag operation information recorded in the operation log in a case where the operation log meets a log information processing condition; (Ellis [0026] if the group parameters include a tag publish policy “Global,” then the example policy resolution manager 104 transmits the group parameters via the example local broker interface 132 to the example cloud server 134 so that the example cloud storage 144 can update the cloud tag data storage 146 associated with the newly added IoT device(s). [0027] When the example local policy resolution manager 104 detects that a third party transmits a notification related to a service opportunity and/or change that is targeted and/or otherwise associated with an IoT device for a user, the example tag manager 106 queries the example tag incentive notification 224 parameter. If the example tag incentive notification 224 parameter reflects a permissive value (e.g., “Yes,” “True,” “1,” etc.), then the notification sent by the third party is allowed to be forwarded to the user access device 124 for consideration. 
If the user agrees to the incentive, then the example tag manager 106 updates corresponding tag parameters 202 to reflect a desire to participate.[0028] a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated [29-30&49-51] provide further details on updating a data operation table associated with the data file according to the tag operation information recorded in the operation log in a case where the operation log meets a log information processing condition [FIG.1] shows the system capable of updating the data operation table associated with the data file) Ellis does not explicitly teach generating a data operation request corresponding to a target data node based on an updated data operation table, and sending the data operation request to the target data node. However, San teaches generating a data operation request corresponding to a target data node based on an updated data operation table, and sending the data operation request to the target data node (San [0011] The heartbeat monitor nodes communicate with each other by updating certain specially-configured data files that reside within a distributed file system having an instance on each heartbeat monitor node. 
The illustrative data files are specially configured to comprise information needed for managing heartbeat monitoring and for communicating information among monitor nodes, e.g., each worker node's current list of target VMs, indications of failed target VMs, network and addressing information for the target VMs, etc. The updated data files are promulgated to all heartbeat monitor nodes by the distributed file system. Thanks to so-called “watch” processes, changes received in the updated data files are detected by each heartbeat monitor node, thus serving as a way of communicating information among heartbeat monitor nodes. Specially configured watch processes detect whether quorum member nodes have failed, whether any worker monitor nodes have failed, as well as detecting other important changes in the system.[0333] The information in the worker-to-VM mapping 6130 that results from executing VM distribution logic 608 at the master node is then distributed to the respective worker nodes using the illustrative VM heartbeat monitoring distributed file system 545, e.g., using data file 712. When changes to worker-to-VM mapping 6130 occur, e.g., due to a failover operation and/or changes in master/worker/observer node roles, the changes are likewise distributed using the illustrative VM heartbeat monitoring distributed file system 545 and changes are detected using the watch processes implemented therein. See also FIGS. 7, 8, 9. [359] node has to send data to one or more worker monitor nodes it updates the “From” field in data structure 802 to “master” and makes other suitable changes in data file 712, e.g., a change in All_VM list for a given worker. Worker nodes “receive” this message by detecting changes to data file 712 via watch processes, e.g., 923, and then will process the updated content of data file 712. 
Likewise, if a worker monitor node has to send data to the master monitor node, the worker updates the “From” field in data structure 802 to “worker ID” and makes other suitable changes in data file 712, e.g., updating its Failed_VM list to indicate that certain of its target VMs are confirmed failed. Master “receives” this message by detecting changes to data file 712 via watch processes, e.g., 922, and then will process the updated content as appropriate, e.g., notifying storage manager 340 to call failover for the target VMs confirmed failed. Thus, updates to data files 712 provide node-to-node communications. [475-479] elaborate on the matter [FIG.1E & 7] show the system which can generate a data operation request corresponding to a target node) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Ellis with the teachings of San in order to create a more efficient system for node communication and updates (San [0003] Businesses recognize the commercial value of their data and seek reliable, cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity. A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a daily, weekly, or monthly maintenance schedule. The company may similarly protect computing systems used by its employees, such as those used by an accounting department, marketing department, engineering department, and so forth. Given the rapidly expanding volume of data under management, companies also continue to seek innovative techniques for managing data growth, for example by migrating data to lower-cost storage over time, reducing redundant data, pruning lower priority data, etc. Enterprises also increasingly view their stored data as a valuable asset and look for solutions that leverage their data. 
For instance, data analysis capabilities, information management, improved data presentation and access features, and the like, are in increasing demand. [0007] The illustrative VM heartbeat monitoring system comprises an illustrative ping monitoring logic that worker monitor nodes use for determining whether their target VMs are operational. To optimize operational efficiency, a master monitor node can be configured to also operate as a worker monitor node, thus performing a dual role. To further optimize operational efficiency, the VMs targeted for heartbeat monitoring are assigned (distributed) to available worker nodes based on an illustrative VM distribution logic that favors monitor nodes which are “close to” the target VMs from a network topology perspective, e.g., same-network, same-server, low hop count, low round-trip latency, etc. To further optimize operational efficiency [0047] With the increasing importance of protecting and leveraging data, organizations simply cannot risk losing critical data. Moreover, runaway data growth and other modern realities make protecting and managing data increasingly difficult. There is therefore a need for efficient, powerful, and user-friendly solutions for protecting and managing data and for smart and efficient management of data storage.)
Corresponding product claim 13 is rejected similarly as claim 1 above. Additional limitations: a computer readable medium storing computer executable instructions which, when executed by a processor, implement the steps of the data processing method (Ellis [FIG.9] shows storing computer executable instructions which, when executed by a processor, implement steps of the data processing method according to claim 1 [0038] FIGS. 1 and 2 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware...FIGS. 1 and 2 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. [40-41] elaborate on the matter)
Regarding claim 2, Ellis and San teach The method according to claim 1, wherein the updating the data operation table associated with the data file according to the tag operation information recorded in the operation log comprises: reading the tag operation information recorded in the operation log, (Ellis [0011] a metadata database 128, which may store household tag information (e.g., spatial context information related to sensor(s), manufacture information related to sensor(s), tag values, etc.), and an end node database 130, [0019]tag publish policy 212 parameter with a value of “Global” will allow at least one third party to receive and/or otherwise retrieve data associated with the corresponding device. In the event a user associates an IoT wattage meter device with the tag publish policy “Global,” then an example power utility company can monitor the power consumption of that device for a period of time (e.g., one month). [0019] In the illustrated example of FIG. 2, any tag publish policy 212 parameter with a value of “Local” will not, by default, allow any third party to receive and/or otherwise retrieve data associated with the corresponding device. On the other hand, any example tag publish policy 212 parameter with a value of “Global” will allow at least one third party to receive and/or otherwise retrieve data associated with the corresponding device.[0033] The example tag manager 106 evaluates the retrieved device identification information associated with the IoT device to determine whether the tag publish policy 212 parameter is either “Local” or “Global.” If the IoT device is “Local,” then the example local policy resolution manager 104 prevents and/or otherwise blocks publication to any third party (e.g., power utility company). 
However, and as described above, if the IoT device is classified as “Local” and if the example tag network authorization 214 parameter permits data access by an authorized user access device 124 [37-47] elaborate on the matter [FIG.1] shows the system for reading the tag operation information recorded in the operation log) and determining data interval information associated with the data file according to the tag operation information, wherein the data interval information is associated with a data interval contained in the data file; (San [0210] For example, an information management policy may specify certain requirements (e.g., that a storage device should maintain a certain amount of free space, that secondary copies should occur at a particular interval, that data should be aged and migrated to other storage after a particular period, that data on a secondary volume should always have a certain level [0456] At block 2026, which is a decision point, the Packet Analyzing Logic determines whether one or more (as many as ten, illustratively) responsive echo reply packets were received from a given target VM within a predefined timeout interval, e.g., five seconds. [457] each followed by the pre-defined timeout interval (e.g., five seconds). Thus, control passes back to block 2022. [482] a predefined timeout interval [484-487] elaborate on having data within a specific interval [FIG.1E] shows corresponding visual on the matter) determining data block operation information according to the data interval information (San [210] storage operation cells and other system information in its management database 146 and/or index 150 (or in another location). 
The master storage manager 140 or other component may also determine whether certain storage-related or other criteria are satisfied, and may perform an action or trigger event (e.g., data migration) in response to the criteria being satisfied, such as where a storage threshold is met for a particular volume, or where inadequate protection exists for certain data. For instance, data from one or more storage operation cells is used to dynamically and automatically mitigate recognized risks, and/or to advise users of risks or suggest actions to mitigate these risks. For example, an information management policy may specify certain requirements (e.g., that a storage device should maintain a certain amount of free space, that secondary copies should occur at a particular interval, that data should be aged and migrated to other storage after a particular period, that data on a secondary volume should always have a certain level of availability and be restorable within a given time period [0456] At block 2026, which is a decision point, the Packet Analyzing Logic determines whether one or more (as many as ten, illustratively) responsive echo reply packets were received from a given target VM within a predefined timeout interval, e.g., five seconds. If so, the target VM is deemed to be operational and control passes to block 2030 to compute the response rate. Otherwise, when no responses are received from a given target VM, control passes to block 2028 to enable retries. [457 & 482] elaborate on the matter [FIG.1E] shows corresponding visual on the matter) and updating the data operation table associated with the data file according to the data block operation information (Ellis [0026] updates to the example household tag table 200 may be accomplished via a web-based device (e.g., a personal computer) or the user access device 124. 
[0028] In some examples, the local user access device interface 122 receives a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated. Changes to one or more tag parameters 202 are stored in the example metadata database 128 of the local storage 126, and the example local policy resolution manager 104 determines whether such changes require propagation of modified tag parameters 202 to the example cloud server 134. [0051] Additional detail associated with managing one or more tags (block 412) is shown in FIG. 7. In the illustrated example of FIG. 7, the example tag manager 106 transmits one or more portions of the example household tag table 200 to the user access device 124 (block 702). As described above, in response to a request to view and/or edit tag parameters 202, the example local user access device interface 122 may employ a web server or other user interface to facilitate viewing and/or editing of the example household tag table 200. [FIG.1] shows the system for updating the data operation table)
Corresponding product claim 15 is rejected similarly as claim 2 above.
Regarding claim 3, Ellis and San teach The method according to claim 2, wherein the generating the data operation request corresponding to the target data node based on the updated data operation table comprises: reading the data block operation information from the updated data operation table; (Ellis [0041] The program 300 of FIG. 3 begins at block 302 where the example local policy resolution manager 104 determines whether a policy update trigger has occurred. As described above, the policy update trigger may be caused by an IoT device (e.g., IoT sensor, IoT actuator, etc.) being added to the system 100 (e.g., detected by the example edge node interface 112), a new and/or modified service published by a third party (e.g., a power utility company) (e.g., detected by the example local policy resolution manager 104 and published to the example smart building manager 110), and/or tag parameter modification (e.g., detected by the example local policy resolution manager 104). In response to detecting a policy update trigger, the example smart building gateway 102 manages the trigger (block 304). However, if no policy update trigger is detected (block 302), then the example cloud server 134 and/or the example smart building gateway 102 determines if a runtime trigger has occurred (block 306). If so, then one or more of the example cloud server 134 or the example smart building gateway 102 manages the detected runtime trigger (block 308).[0042] Additional detail associated with managing the policy trigger of block 304 is shown in FIG. 4. In the illustrated example of FIG. 4, the example edge node interface 112 determines whether a new IoT device has been added (block 402). If so, the example smart building gateway 102 manages the device change (block 404), as described in further detail below. 
Additionally, if the example edge node interface 112 does not identify any device change activity (block 402), then the example local policy resolution manager 104 determines whether a service change has occurred (block 406). [0045] In the illustrated example of FIG. 5, the edge node interface 112 extracts publish information from a connected IoT device (block 502). In some examples, the publish information includes a model number, a serial number, a manufacturer name, an output unit, etc. The example device context manager 108 determines whether the publish information is associated with a user defined group (e.g., a user profile) (block 504), such as a group associated with spatial characteristics/context [46-52] elaborate on the matter) generating the data operation request corresponding to the target data node according to the physical address information contained in the data block operation information, wherein the data operation request is associated with a data block of the data file, (San [0090] Metadata can include, without limitation, one or more of the following: the data owner (e.g., the client or user that generates the data), the last modified time (e.g., the time of the most recent modification of the data object), a data object name (e.g., a file name), a data object size (e.g., a number of bytes of data), information about the content (e.g., an indication as to the existence of a particular search term), user-supplied tags, to/from information for email (e.g., an email sender, recipient, etc.), creation date, file type (e.g., format or application type), last accessed time, application type (e.g., type of application that generated the data object), location/network (e.g., a current, past or future location of the data object and network pathways to/from the data object), [0093] As it services users, each hosted service may generate additional data and metadata, which may be managed by system 100, e.g., as primary data 112. 
In some cases, the hosted services may be accessed using one of the applications 110. As an example, a hosted mail service may be accessed via browser running on a client computing [0094] create and store one or more secondary copies 116 of primary data 112 including its associated metadata. The secondary storage computing devices 106 and the secondary storage devices 108 may be referred to as secondary storage subsystem 118. [151] include metadata such as a list of the data objects (e.g., files/subdirectories, database objects, mailbox objects, etc.), a logical path to the secondary copy 116 on the corresponding secondary storage device 108, location information (e.g., offsets) indicating where the data objects are stored in the secondary storage device 108, when the data objects were created or modified, etc.[110] storage manager 140. Control information can generally include parameters and instructions for carrying out information management operations, such as, without limitation, instructions to perform a task associated with an operation, timing information specifying when to initiate a task, data path information specifying what components to communicate with or access in carrying out an operation, and the like. In other embodiments, some information management operations are controlled or initiated by other components of system [178] Each pointer points to a respective stored data block, so that collectively, the set of pointers reflect the storage location and state of the data object (e.g., file(s) or volume(s) or data set(s)) at the point in time when the snapshot copy was created. [FIG.1C & 1E] shows corresponding visual) and the data block is stored in the target data node. 
(San [0006] Upon detecting a target-VM failure and confirming the failure with the VM's host server and/or VM data center controller to ensure that the VM is really in a failed state that requires failover, the illustrative worker monitor node notifies the master monitor node, which in turn carries out its responsibility for notifying a storage manager of this and any other failed VMs in the system. The storage manager not only invokes and manages failover operations for the failed target VM(s) after receiving proper notice from the master monitor node, but also manages other storage management operations throughout the data storage management system, such as backups, replication, archiving, content indexing, restores, etc. Likewise, the storage manager manages failback operations from a site that was previously considered to be a failover destination back to the former source site, e.g., after the source data center recovers, after a failed over VM recovers, etc. [0155] Moreover, in some cases, one or more of the individual components of information management system 100 can be distributed to multiple separate computing devices. As one example, for large file systems where the amount of data stored in management database 146 is relatively large, database 146 may be migrated to or may otherwise reside on a specialized database server (e.g., an SQL server) separate from a server that implements the other functions of storage manager 140.[0352] FIG. 8 depicts a template for content of an illustrative data file 712 used in illustrative distributed file system 545. As explained elsewhere herein (e.g., FIG. 7), data files 712 are generally used for storing information that pertains to certain salient components of system 300 and thanks to the coordination and watch processes performed by the underlying Apache ZooKeeper infrastructure 601, changes in a given data file 712 stored in a given monitor node are communicated to other data files 712 in the other monitor nodes. 
[FIG.1C & 1E] shows corresponding visual)
Corresponding product claim 16 is rejected similarly as claim 3 above.
Regarding claim 4, Ellis and San teach The method according to claim 1, wherein the storing the tag operation information associated with the tag operation instruction in the operation log comprises: storing the tag operation information associated with the tag operation instruction in an operation log associated with the data file, wherein the operation log is created according to a historical operation instruction of the data file or using a target operation log associated with the data file as the operation log, and storing the tag operation information associated with the tag operation instruction in the operation log, wherein the target operation log is used to store tag operation information associated with at least two data files. (San [123] history maintenance, user security management, disaster recovery management, and/or user interfacing for system administrators and/or end users of system 100; [0124] sending, searching, and/or viewing of log files; and [0125] implementing operations management functionality [0233] Information management policies 148 can additionally specify or depend on historical or current criteria that may be used to determine which rules to apply to a particular data object, system component, or information management operation, such as:[0240] the current or historical storage capacity of various storage devices; [0241] the current or historical network capacity of network pathways connecting various components within the storage operation cell; [0242] access control lists or other security information; and [0243] the content of a particular data object (e.g., its textual content) or of metadata associated with the data object. [FIG.1E] shows corresponding visual)
Corresponding product claim 17 is rejected similarly as claim 4 above.
Regarding claim 5, Ellis and San teach The method according to claim 1, wherein the updating the data operation table associated with the data file according to the tag operation information recorded in the operation log in a case where the operation log meets the log information processing condition comprises: in a case where the tag operation information is newly added to the operation log, determining that the operation log meets the log information processing condition, and performing a step of updating the data operation table associated with the data file according to the tag operation information recorded in the operation log; or, querying log information contained in the operation log according to a preset log query cycle, in a case where it is determined, according to a query result, that the tag operation information is a newly added tag operation information, determining that the operation log meets the log information processing condition, and performing a step of updating the data operation table associated with the data file according to the tag operation information recorded in the operation log. (Ellis [0017] the tag owner 208 associated with Device 1 refers to “Jane/Sarah” to indicate that either of those household members may make changes to Device 1 (e.g., a shared device). For example, both Jane and Sarah may have corresponding usernames and passwords when accessing the example local policy resolution manager 104 of the policy management system 100. In the event household member Jane logs into the local policy resolution manager (e.g., via a network (e.g., Internet) personal computing device 124), then only Devices 1-5 would be viewable and/or otherwise available for modification because the tag owner 208 parameter associated with Device 6 is assigned to Sarah only. 
As described above, because examples disclosed herein allow each device to have unique parameter tags to define device behavior, IoT solutions may be implemented without concern that certain types of personal information will be disclosed without permission and/or that certain types of personal information will be outside user control. [0026] parameters include a tag publish policy “Global,” then the example policy resolution manager 104 transmits the group parameters via the example local broker interface 132 to the example cloud server 134 so that the example cloud storage 144 can update the cloud tag data storage 146 associated with the newly added IoT device(s)... updates to the example household tag table 200 may be accomplished via a web-based device (e.g., a personal computer) or the user access device [0028] to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated. Changes to one or more tag parameters 202 are stored in the example metadata database 128 of the local storage 126, and the example local policy resolution manager 104 determines whether such changes require propagation of modified tag parameters 202 to the example cloud server 134. [29-30] elaborate on the matter [FIG.1] shows corresponding visual)
Corresponding product claim 18 is rejected similarly to claim 5 above.
Regarding claim 6, Ellis and San teach The method according to claim 1, after performing the step of sending the data operation request to the target data node, further comprising: determining operation status information based on an execution result of the target data node executing the data operation request, and recording the operation status information in the updated data operation table. (San [0088] a certain storage policy. A given client may thus comprise several subclients, each subclient associated with a different storage policy. For example, some files may form a first subclient that requires compression and deduplication and is associated with a first storage policy. Other files of the client may form a second subclient that requires a different retention schedule as well as encryption, and may be associated with a different, second storage policy. As a result, though the primary data may be generated by the same application 110 and may belong to one given client, portions of the data may be assigned to different subclients for distinct treatment by system 100. More detail on subclients is given in regard to storage policies below. [127] Storage manager 140 can process an information management policy 148 and/or index 150 and, based on the results, identify an information management operation to perform, identify the appropriate components in system 100 to be involved in the operation (e.g., client computing devices 102 and corresponding data agents 142, secondary storage computing devices 106 and corresponding media agents 144, etc.), establish connections to those components and/or between those components, and/or instruct and control those components to carry out the operation. 
In this manner, system 100 can translate stored information into coordinated activity among the various computing devices in system 100.[0206] System 100 generally organizes and catalogues the results into a content index, which may be stored within media agent database 152, for example. The content index can also include the storage locations of or pointer references to indexed data in primary data 112 and/or secondary copies 116. Results may also be stored elsewhere in system 100 (e.g., in primary storage device 104 or in secondary storage device 108). Such content index data provides storage manager 140 or other components with an efficient mechanism for locating primary data 112 and/or secondary copies 116 of data objects that match particular criteria [FIG.1E] shows a corresponding visual)
Corresponding product claim 19 is rejected similarly to claim 6 above.
Regarding claim 9, Ellis and San teach The method according to claim 1, further comprising: determining the target data node, wherein the determining the target data node comprises: determining a file identifier corresponding to the data file, (San [175] a snapshot may generally capture the directory structure of an object in primary data 112 such as a file or volume or other data set at a particular moment in time and may also preserve file attributes and contents. A snapshot in some cases is created relatively quickly, e.g., substantially instantly, using a minimum amount of file space, but may still function as a conventional file system backup. [0222] Another type of information management policy 148 is an “audit policy” (or “security policy”), which comprises preferences, rules and/or criteria that protect sensitive data in system ...e.g., in metadata identifying a document [265] criteria, such as users who have created, accessed or modified a document or data object; file or application types; content or metadata keywords; clients or storage locations; dates of data creation and/or access; review status or other status within a workflow (e.g., reviewed or un-reviewed); modification times or types of modifications; and/or any other data attributes in any combination, without limitation. A classification rule may also be defined using other classification tags in the taxonomy. The various criteria used to define a classification rule may be combined in any suitable fashion, for example, via Boolean operators, to define a complex classification rule. 
As an example, an e-discovery classification policy might define a classification tag “privileged” that is associated with documents or data object [274 & 289] elaborate on the matter) determining a first data node among data nodes contained in a distributed file system according to the file identifier, and determining a second data node associated with the first data node among the data nodes contained in the distributed file system; taking the first data node and the second data node as target data nodes. (San [0292] System 200 can also be configured to allow for seamless addition of media agents 244 to grid 245 via automatic configuration. As one illustrative example, a storage manager (not shown) or other appropriate component may determine that it is appropriate to add an additional node to control tier 231, and perform some or all of the following: (i) assess the capabilities of a newly added or otherwise available computing device as satisfying a minimum criteria to be configured as or hosting a media agent in control tier 231; [0295] System 300 is also referred to herein as a “VM heartbeat monitoring system” at least because it comprises a plurality of heartbeat monitor nodes that monitor respective one or more target virtual machines (VMs). Because system 300 is also a data storage management system, certain components are configured to handle failover and failback operations for failed target VMs. [0416] At block 1704, which is a decision point, the master monitor node determines whether it received confirmation of failed target VM from a worker node, e.g., via watch in distributed file system 545 (see, e.g., FIG. 9 and block 1910). Confirmation of a failed target VM is generally required according to the illustrative embodiments, to ensure that failovers are called judiciously. Worker monitor nodes are responsible for confirming that target VMs are really in a failed state (see, e.g., FIG. 19). 
If no such confirmation is received, the master monitor node takes no action, and control loops back to the start of block 1704. [0427] At block 1816, which is a decision point, VM distribution logic 608 determines whether the round-trip ping latency from the identified worker monitor node to the present target VM is below an acceptability threshold. The acceptability threshold depends on the implemented network topography and will be administered as a parameter in management database 346. If not, the identified worker monitor node is deemed unsuitable for the present target VM and control passes back to block 1802 to identify another worker monitor node candidate; otherwise, control passes to block 1818. [476] and detecting, by the second worker monitor node, based on a change in the distributed file system [495-496] elaborate on the matter [FIG.1E] shows an overall visual of the system)
Regarding claim 11, Ellis and San teach The method according to claim 6, further comprising: querying the data operation table based on a preset data operation table query cycle; (Ellis [0010] In still other examples, for households that adopt one or more IoT solutions (e.g., IoT light switches, IoT door sensors, IoT security systems, etc.), query and/or actuation activities are typically routed through a cloud-based communication infrastructure. For example, in the event a household user attempts to open or close window shades with an IoT shade controller using a wireless device (e.g., a wireless telephone), then an actuation command is sent from the wireless device to either a home area network (HAN) or from a data service associated with the wireless device (e.g., via a cell-based data connection). [0015] If one or more policy update triggers does not occur, the example cloud server 134 or the example local policy resolution manager 104 identifies whether one or more runtime triggers occur. For example, in the event a device satisfies one or more operating thresholds (e.g., a temperature threshold, a door sensor change, a motion detector trigger, etc.), or if a device is queried (e.g., a local query for a sensor, a remote query for a sensor, a local switch activation, a remote switch activation, etc.), then the example system 100 determines whether publication of the device is to be shared (e.g., shared with a utility company) or whether the device may be queried or controlled by the user access device 124. Such decisions are based on tag parameters associated with the device that has satisfied an operating threshold, is queried, or is requested to be controlled. [0022] detecting that a new IoT device is added to the system 100, thereby reducing an amount of time required to configure the IoT device.
Additionally, the example tag manager 106 may populate the tag device type 218 parameter based on available device information stored on and/or otherwise available when a new IoT device is added to the system 100. In the illustrated example of FIG. 2, the tag device type 218 parameter for Device 1 is “Magnet Sensor” to reflect a device type. In some examples, the edge node interface 112 detects insertion or addition of an IoT device and extracts available model number and/or serial number information associated with the IoT device, and invokes a network (e.g., Internet) query to the network 136 via the example local broker interface 132 (e.g., via a keyword web search using the model/serial number). In the event additional information from the network 136 is received based on provided model [0031] a query and/or control attempt of one or more IoT devices while in the vicinity of the HAN 120. Still further, runtime triggers may be detected by the example cloud user access device interface 142 in response to the user access device 124 invoking a query and/or control attempt of one or more IoT devices while outside the vicinity of the HAN 120, such as when the example user access device 124 is communicatively connected to the remote network 143, a wide area network or a local area network (e.g., a coffee shop WiFi hotspot, a cell-based network associated with the user access device 124, etc.) [0035] receive data from one or more IoT devices... 
then the query from the user access device 124 may employ a cell-based network communication system to send the query request to the cloud server, and the resulting data from the query is returned to the user access device [35-36] elaborate on the matter [FIG.1] shows overall visual) determining target physical address information according to a query result, wherein the target physical address information is physical address information (San [0090] Metadata can include, without limitation, one or more of the following: the data owner (e.g., the client or user that generates the data), the last modified time (e.g., the time of the most recent modification of the data object), a data object name (e.g., a file name), a data object size (e.g., a number of bytes of data), information about the content (e.g., an indication as to the existence of a particular search term), user-supplied tags, to/from information for email (e.g., an email sender, recipient, etc.), creation date, file type (e.g., format or application type), last accessed time, application type (e.g., type of application that generated the data object), location/network (e.g., a current, past or future location of the data object and network pathways to/from the data object), [0093] As it services users, each hosted service may generate additional data and metadata, which may be managed by system 100, e.g., as primary data 112. In some cases, the hosted services may be accessed using one of the applications 110. As an example, a hosted mail service may be accessed via browser running on a client computing [0094] create and store one or more secondary copies 116 of primary data 112 including its associated metadata. The secondary storage computing devices 106 and the secondary storage devices 108 may be referred to as secondary storage subsystem 118. 
[151] include metadata such as a list of the data objects (e.g., files/subdirectories, database objects, mailbox objects, etc.), a logical path to the secondary copy 116 on the corresponding secondary storage device 108, location information (e.g., offsets) indicating where the data objects are stored in the secondary storage device 108, when the data objects were created or modified, etc.[110] storage manager 140. Control information can generally include parameters and instructions for carrying out information management operations, such as, without limitation, instructions to perform a task associated with an operation, timing information specifying when to initiate a task, data path information specifying what components to communicate with or access in carrying out an operation, and the like. In other embodiments, some information management operations are controlled or initiated by other components of system [178] Each pointer points to a respective stored data block, so that collectively, the set of pointers reflect the storage location and state of the data object (e.g., file(s) or volume(s) or data set(s)) at the point in time when the snapshot copy was created. [FIG.1C & 1E] shows corresponding visual) corresponding to target data block operation information whose operation status is operation failure in the data operation table; (San [0006] Upon detecting a target-VM failure and confirming the failure with the VM's host server and/or VM data center controller to ensure that the VM is really in a failed state that requires failover, the illustrative worker monitor node notifies the master monitor node, which in turn carries out its responsibility for notifying a storage manager of this and any other failed VMs in the system. 
The storage manager not only invokes and manages failover operations for the failed target VM(s) after receiving proper notice from the master monitor node, but also manages other storage management operations throughout the data storage management system, such as backups, replication, archiving, content indexing, restores, etc. Likewise, the storage manager manages failback operations from a site that was previously considered to be a failover destination back to the former source site, e.g., after the source data center recovers, after a failed over VM recovers, etc. [346] A representation of the entire state of the heartbeat monitoring application is stored in the/Master FS-node 708. This is useful in case of current master's failure. After election of a new master, the new master monitor node gets the whole state of the heartbeat monitoring application by querying the heartbeat monitoring distributed file system 545 so that the information can be reliably and rapidly recovered and available for use by the new master monitor node. [371-379] elaborate on the matter [FIG.1 E & 12] show the corresponding system) generating a data re-operation request based on the target physical address information, and sending the data re-operation request to a data node corresponding to the target data block operation information; receiving an execution result of the data node executing the data re-operation request, (San [0256] The target media agent 144A receives the data-agent-processed data from client computing device 102, and at step 4 generates and conveys backup copy 116A to disk library 108A to be stored as backup copy 116A, again at the direction of storage manager 140 and according to backup copy rule set 160. 
Media agent 144A can also update its index 153 to include data and/or metadata related to backup copy 116A, such as information indicating where the backup copy 116A resides on disk library 108A, where the email copy resides, where the file system copy resides, data and metadata for cache retrieval, etc. Storage manager 140 may similarly update its index 150 to include information relating to the secondary copy operation, such as information relating to the type of operation, a physical location associated with one or more copies created by the operation, the time the operation was performed, status information relating to the operation, the components involved in the operation, and the like. In some cases, storage manager 140 may update its index 150 to include some or all of the information stored in index 153 of media agent 144A. At this point, the backup job may be considered complete. After the 30-day retention period expires, storage manager 140 instructs media agent 144A to delete backup copy 116A from disk library 108A and indexes 150 and/or 153 are updated accordingly. [0275] As an example, data structures 180 illustrated in FIG. 1H may have been created as a result of separate secondary copy operations involving two client computing devices 102. For example, a first secondary copy operation on a first client computing device 102 could result in the creation of the first chunk folder 184, and a second secondary copy operation on a second client computing device 102 could result in the creation of the second chunk folder 185. Container files 190/191 in the first chunk folder 184 would contain the blocks of SI data of the first client computing device 102. If the two client computing devices 102 have substantially similar data, the second secondary copy operation on the data of the second client computing device 102 would result in media agent [482] querying again...
[FIG.1E in conjunction with FIG.2A] show generating a data re-operation request based on the target physical address information, sending the data re-operation request to a data node corresponding to the target data block operation information, receiving an execution result of the data node executing the data re-operation request, and updating the data operation table based on the execution result.) and updating the data operation table based on the execution result. (San [0210] master storage manager 140 may also track status by receiving periodic status updates from the storage managers 140 (or other components) in the respective cells regarding jobs, system components, system resources, and other items. In some embodiments, a master storage manager 140 may store status information and other information regarding its associated storage operation cells and other system information in its management database 146 and/or index 150 (or in another location) [0256] The target media agent 144A receives the data-agent-processed data from client computing device 102, and at step 4 generates and conveys backup copy 116A to disk library 108A to be stored as backup copy 116A, again at the direction of storage manager 140 and according to backup copy rule set 160. Media agent 144A can also update its index 153 to include data and/or metadata related to backup copy 116A, such as information indicating where the backup copy 116A resides on disk library 108A, where the email copy resides, where the file system copy resides, data and metadata for cache retrieval, etc. Storage manager 140 may similarly update its index 150 to include information relating to the secondary copy operation, such as information relating to the type of operation, a physical location associated with one or more copies created by the operation, the time the operation was performed, status information relating to the operation, the components involved in the operation, and the like.
In some cases, storage manager 140 may update its index 150 to include some or all of the information stored in index 153 of media agent 144A. At this point, the backup job may be considered complete. After the 30-day retention period expires, storage manager 140 instructs media agent 144A to delete backup copy 116A from disk library 108A and indexes 150 and/or 153 are updated accordingly. [452] elaborates on the matter [FIG.1E in conjunction with FIG.2A] show generating a data re-operation request based on the target physical address information, sending the data re-operation request to a data node corresponding to the target data block operation information, receiving an execution result of the data node executing the data re-operation request, and updating the data operation table based on the execution result.)
Regarding claim 12, Ellis teaches A data processing system, comprising: a server, configured to receive a tag operation instruction for a data file, (Ellis [0014] In operation, the example local policy resolution manager 104 determines whether a policy update trigger has occurred and, if so, the example smart building gateway 102 manages the policy trigger. For example, in the event a user desires to change the manner in which personal information from one or more sensors or actuators is handled, then the example smart building gateway 102 provides a user interface to allow one or more tags associated with the one or more sensors/actuators to be configured, [0016] As described above, each device is associated with tag parameters to define how the device is to operate and access privileges afforded to different entities requesting information or control attempts of the device. The example tag manager 106 generates and manages a household tag table 200, as shown in FIG. 2...[0028] a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. 
In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table [51-56] elaborate on receiving a tag operation instruction for a data file, [FIG.1] shows the overall system which receives a tag operation instruction for a data file) and store tag operation information associated with the tag operation instruction in an operation log; (Ellis [0011] a local storage 126 that contains, in part, a metadata database 128, which may store household tag information (e.g., spatial context information related to sensor(s), manufacture information related to sensor(s), tag values, etc.), and an end node database 130, which may store sensor data (e.g., sensor information, actuator acknowledgement/state information, etc.). The example system 100 also includes a local broker interface 132 communicatively connected to a cloud server 134 (sometimes referred to herein as a remote cloud server) via a network 136, such as the Internet.
[0028] one or more tag parameters 202 are stored in the example metadata database 128 of the local storage 126, and the example local policy resolution manager 104 determines whether such changes require propagation of modified tag parameters 202 to the example cloud server [36-38] further elaborate on storing tag operation information [FIG.1] shows storing tag operation information associated with the tag operation instruction in an operation log) update a data operation table associated with the data file according to the tag operation information recorded in the operation log in a case where the operation log meets a log information processing condition; (Ellis [0026] if the group parameters include a tag publish policy “Global,” then the example policy resolution manager 104 transmits the group parameters via the example local broker interface 132 to the example cloud server 134 so that the example cloud storage 144 can update the cloud tag data storage 146 associated with the newly added IoT device(s). [0027] When the example local policy resolution manager 104 detects that a third party transmits a notification related to a service opportunity and/or change that is targeted and/or otherwise associated with an IoT device for a user, the example tag manager 106 queries the example tag incentive notification 224 parameter. If the example tag incentive notification 224 parameter reflects a permissive value (e.g., “Yes,” “True,” “1,” etc.), then the notification sent by the third party is allowed to be forwarded to the user access device 124 for consideration.
If the user agrees to the incentive, then the example tag manager 106 updates corresponding tag parameters 202 to reflect a desire to participate. [0028] a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated [29-30&49-51] provide further details on updating a data operation table associated with the data file according to the tag operation information recorded in the operation log in a case where the operation log meets a log information processing condition [FIG.1] shows the system capable of updating the data operation table associated with the data file) and generate status update information based on the tagging result and send the status update information to the server; (Ellis [0026] if the group parameters include a tag publish policy “Global,” then the example policy resolution manager 104 transmits the group parameters via the example local broker interface 132 to the example cloud server 134 so that the example cloud storage 144 can update the cloud tag data storage 146 associated with the newly added IoT device(s).
[0027] When the example local policy resolution manager 104 detects that a third party transmits a notification related to a service opportunity and/or change that is targeted and/or otherwise associated with an IoT device for a user, the example tag manager 106 queries the example tag incentive notification 224 parameter. If the example tag incentive notification 224 parameter reflects a permissive value (e.g., “Yes,” “True,” “1,” etc.), then the notification sent by the third party is allowed to be forwarded to the user access device 124 for consideration. If the user agrees to the incentive, then the example tag manager 106 updates corresponding tag parameters 202 to reflect a desire to participate. [0028] a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated [29-30&49-51] provide further details on generating status update information based on the tagging result and sending the status update information to the server [FIG.1] shows the system capable of generating and sending the status update information) wherein the server is further configured to receive the operation status information and record the operation status information into the updated data operation table. 
(Ellis [0026] cloud server 134 so that the example cloud storage 144 can update the cloud tag data storage 146 associated with the newly added [0028] In some examples, the local user access device interface 122 receives a request from the user access device 124 (or a computing device providing valid logon credentials) to update and/or modify one or more tag parameters 202 associated with one or more IoT devices. In response to such a request, the example tag manager enables a user interface (e.g., a web-based GUI provided by the local user access device interface 122) to facilitate user input/interaction with the example household tag table 200, and the example local policy resolution manager 104 enables viewing and/or modification rights for only those IoT device(s) for which an authorized user is associated. Changes to one or more tag parameters 202 are stored in the example metadata database 128 of the local storage 126, and the example local policy resolution manager 104 determines whether such changes require propagation of modified tag parameters 202 to the example cloud server 134. 
[0030] updates the example cloud storage 144 in the cloud server 134 with the tag parameters 202 associated with the IoT device so that any query and/or control attempt by an authorized user access device 124 will be recognized by the cloud server [39-40] elaborate on the matter [FIG.1] shows an overall visual) Ellis does not explicitly teach generate a data operation request corresponding to a target data node based on an updated data operation table, and send the data operation request to the target data node; the target data node, configured to tag a data interval to be operated in a local storage space based on the data operation request. However, San teaches generate a data operation request corresponding to a target data node based on an updated data operation table, and send the data operation request to the target data node; (San [0011] The heartbeat monitor nodes communicate with each other by updating certain specially-configured data files that reside within a distributed file system having an instance on each heartbeat monitor node. The illustrative data files are specially configured to comprise information needed for managing heartbeat monitoring and for communicating information among monitor nodes, e.g., each worker node's current list of target VMs, indications of failed target VMs, network and addressing information for the target VMs, etc. The updated data files are promulgated to all heartbeat monitor nodes by the distributed file system. Thanks to so-called “watch” processes, changes received in the updated data files are detected by each heartbeat monitor node, thus serving as a way of communicating information among heartbeat monitor nodes. 
Specially configured watch processes detect whether quorum member nodes have failed, whether any worker monitor nodes have failed, as well as detecting other important changes in the system. [0333] The information in the worker-to-VM mapping 6130 that results from executing VM distribution logic 608 at the master node is then distributed to the respective worker nodes using the illustrative VM heartbeat monitoring distributed file system 545, e.g., using data file 712. When changes to worker-to-VM mapping 6130 occur, e.g., due to a failover operation and/or changes in master/worker/observer node roles, the changes are likewise distributed using the illustrative VM heartbeat monitoring distributed file system 545 and changes are detected using the watch processes implemented therein. See also FIGS. 7, 8, 9. [0359] node has to send data to one or more worker monitor nodes it updates the “From” field in data structure 802 to “master” and makes other suitable changes in data file 712, e.g., a change in All_VM list for a given worker. Worker nodes “receive” this message by detecting changes to data file 712 via watch processes, e.g., 923, and then will process the updated content of data file 712. Likewise, if a worker monitor node has to send data to the master monitor node, the worker updates the “From” field in data structure 802 to “worker ID” and makes other suitable changes in data file 712, e.g., updating its Failed_VM list to indicate that certain of its target VMs are confirmed failed. Master “receives” this message by detecting changes to data file 712 via watch processes, e.g., 922, and then will process the updated content as appropriate, e.g., notifying storage manager 340 to call failover for the target VMs confirmed failed. Thus, updates to data files 712 provide node-to-node communications. 
[475-479] elaborate on the matter [FIG.1E & 7] show the system which can generate a data operation request corresponding to a target node) the target data node, configured to tag a data interval to be operated in a local storage space based on the data operation request, (San [0210] For example, an information management policy may specify certain requirements (e.g., that a storage device should maintain a certain amount of free space, that secondary copies should occur at a particular interval, that data should be aged and migrated to other storage after a particular period, that data on a secondary volume should always have a certain level [0456] At block 2026, which is a decision point, the Packet Analyzing Logic determines whether one or more (as many as ten, illustratively) responsive echo reply packets were received from a given target VM within a predefined timeout interval, e.g., five seconds. [457] each followed by the pre-defined timeout interval (e.g., five seconds). Thus, control passes back to block 2022. [482] a predefined timeout interval [484-487] elaborate on having data within a specific interval [FIG.1E] shows corresponding visual on the matter) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ellis with the teachings of San in order to create a more efficient system for node communication and updates (San [0003] Businesses recognize the commercial value of their data and seek reliable, cost-effective ways to protect the information stored on their computer networks while minimizing impact on productivity. A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a daily, weekly, or monthly maintenance schedule. 
The company may similarly protect computing systems used by its employees, such as those used by an accounting department, marketing department, engineering department, and so forth. Given the rapidly expanding volume of data under management, companies also continue to seek innovative techniques for managing data growth, for example by migrating data to lower-cost storage over time, reducing redundant data, pruning lower priority data, etc. Enterprises also increasingly view their stored data as a valuable asset and look for solutions that leverage their data. For instance, data analysis capabilities, information management, improved data presentation and access features, and the like, are in increasing demand. [0007] The illustrative VM heartbeat monitoring system comprises an illustrative ping monitoring logic that worker monitor nodes use for determining whether their target VMs are operational. To optimize operational efficiency, a master monitor node can be configured to also operate as a worker monitor node, thus performing a dual role. To further optimize operational efficiency, the VMs targeted for heartbeat monitoring are assigned (distributed) to available worker nodes based on an illustrative VM distribution logic that favors monitor nodes which are “close to” the target VMs from a network topology perspective, e.g., same-network, same-server, low hop count, low round-trip latency, etc. To further optimize operational efficiency [0047] With the increasing importance of protecting and leveraging data, organizations simply cannot risk losing critical data. Moreover, runaway data growth and other modern realities make protecting and managing data increasingly difficult. There is therefore a need for efficient, powerful, and user-friendly solutions for protecting and managing data and for smart and efficient management of data storage.)
Regarding claim 14, Ellis and San teach A non-transitory computer readable storage medium, storing computer executable instructions which, when executed by a processor, implement steps of the data processing method according to claim 1 (Ellis [FIG.9] shows storing computer executable instructions which, when executed by a processor, implement steps of the data processing method according to claim 1 [0038] FIGS. 1 and 2 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware...FIGS. 1 and 2 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware. [40-41] elaborate on the matter)
Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20170093915 A1; Ellis; Keith A. et al. (hereinafter Ellis) in view of US 20180095855 A1; SANAKKAYALA; Santhosh et al. (hereinafter San) and US 20230401203 A1; OBEIDI; YAZAN et al. (hereinafter Obeidi).
Regarding claim 7, Ellis and San teach The method according to claim 6, further comprising: receiving a data read request of the data file; querying the data operation table based on the data read request; (Ellis [0010] In still other examples, for households that adopt one or more IoT solutions (e.g., IoT light switches, IoT door sensors, IoT security systems, etc.), query and/or actuation activities are typically routed through a cloud-based communication infrastructure. For example, in the event a household user attempts to open or close window shades with an IoT shade controller using a wireless device (e.g., a wireless telephone), then an actuation command is sent from the wireless device to either a home area network (HAN) or from a data service associated with the wireless device (e.g., via a cell-based data connection). [0015] If one or more policy update triggers does not occur, the example cloud server 134 or the example local policy resolution manager 104 identifies whether one or more runtime triggers occur. For example, in the event a device satisfies one or more operating thresholds (e.g., a temperature threshold, a door sensor change, a motion detector trigger, etc.), or if a device is queried (e.g., a local query for a sensor, a remote query for a sensor, a local switch activation, a remote switch activation, etc.), then the example system 100 determines whether publication of the device is to be shared (e.g., shared with a utility company) or whether the device may be queried or controlled by the user access device 124. Such decisions are based on tag parameters associated with the device that has satisfied an operating threshold, is queried, or is requested to be controlled. [0022] detecting that a new IoT device is added to the system 100, thereby reducing an amount of time required to configure the IoT device. 
Additionally, the example tag manager 106 may populate the tag device type 218 parameter based on available device information stored on and/or otherwise available when a new IoT device is added to the system 100. In the illustrated example of FIG. 2, the tag device type 218 parameter for Device 1 is “Magnet Sensor” to reflect a device type. In some examples, the edge node interface 112 detects insertion or addition of an IoT device and extracts available model number and/or serial number information associated with the IoT device, and invokes a network (e.g., Internet) query to the network 136 via the example local broker interface 132 (e.g., via a keyword web search using the model/serial number). In the event additional information from the network 136 is received based on provided model [0031] a query and/or control attempt of one or more IoT devices while in the vicinity of the HAN 120. Still further, runtime triggers may be detected by the example cloud user access device interface 142 in response to the user access device 124 invoking a query and/or control attempt of one or more IoT devices while outside the vicinity of the HAN 120, such as when the example user access device 124 is communicatively connected to the remote network 143, a wide area network or a local area network (e.g., a coffee shop WiFi hotspot, a cell-based network associated with the user access device 124, etc.) [0035] receive data from one or more IoT devices... then the query from the user access device 124 may employ a cell-based network communication system to send the query request to the cloud server, and the resulting data from the query is returned to the user access device [35-36] elaborate on the matter) The combination does not explicitly teach determining a feedback result corresponding to the data read request according to a query result. However, Obeidi teaches determining a feedback result corresponding to the data read request according to a query result. 
(Obeidi [0013] queries that utilizes an explainable interpretation feedback model. The following described exemplary embodiments provide a system, method, and program product to, among other things, receive a natural language query, automatically detect whether the received natural language query includes an implicit intent by using a reasoning engine, wherein the reasoning engine including domain-agnostic algorithms having domain-agnostic reasoning axioms, generate a modified query including a default inference to provide to the user, and iteratively obtaining user feedback and modifying the natural language query until approved by the user to generate a final output to the received natural language query. The present embodiment has the capacity to improve natural language processing technology by allowing users to provide feedback based on the system's explainable interpretation of the received natural language query. The present embodiment has the capacity to further improve natural language processing technology by providing an improved natural language processing system that is domain-agnostic, allowing a user to bring their own data setup to engage with the natural language processing system.[0015] Many users input natural language queries into question-answering (QA) systems to obtain information. Typically, natural language QA systems are domain-specific regarding what data they access and interact with. Thus, many natural language QA systems are manually customized to allow its users to input queries to obtain information in a specific domain (e.g. Finance, healthcare, etc.) Some natural language QA systems are designed to obtain feedback from users to help improve the natural language QA system. 
However, feedback in conventional natural language QA systems is typically related to an output or answer obtained by the natural language QA system and not an explanation of how the system interpreted a received natural language query. [0031] query including the alternative inference to the user if the modified query was rejected. Finally, natural language processing program 110A, 110B may automatically store information obtained from the feedback in a fact history repository. In turn, the received natural language query has been processed by a system that is domain-agnostic, and can obtain user feedback related to explainability of the system's interpretation of a received natural language query, thereby allowing the system to both self-improve how it interprets queries over time and be used with a variety of domains without the need for human-intensive manual intervention or training [37-41] further elaborate on the matter [FIG.1] shows a corresponding visual) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ellis and San with Obeidi's feedback models in order to improve the overall system output (Obeidi [0013], [0015], and [0031], quoted above; [37-41] further elaborate on the matter; [FIG.1] shows a corresponding visual)
Corresponding product claim 20 is rejected similarly to claim 7 above.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over US 20170093915 A1; Ellis; Keith A. et al. (hereinafter Ellis) in view of US 20180095855 A1; SANAKKAYALA; Santhosh et al. (hereinafter San) and US 20220232592 A1; DIMOU; Konstantinos et al. (hereinafter Dimou).
Regarding claim 10, Ellis and San teach The method according to claim 9, further comprising: receiving a target data read request; sending the target data read request to the first data node; (San [0324] Master node selection logic 606 is a functional component of VM heartbeat monitoring framework 600 that comprises functionality for establishing a master monitor node within quorum 440 in collaboration with storage manager 340. See also FIG. 16. [0331] Validate operation 6106 receives target VM list 6102 and/or workers list 6104 as inputs. Validate operation 6106 is executed by VM distribution logic 608 to determine whether the VMs as identified in the administered lists 6102 and 6104 are actually operational in system 300 according to respective hypervisor(s) 520 in one or more VM host/servers such as 401, 402, etc. Typically, a hypervisor 520 is queried by VM distribution logic 608 according to techniques well known in the art and in response reports on active VMs executing over the said hypervisor 520. After comparing the administered lists 6102 and 6104 against the reports received from hypervisor(s) 520, VM distribution logic 608 generates a respective validated target-VM list 6120 and a validated worker-VM list 6121. [361] changes will cause remedial action to be taken, e.g., re-targeting ping monitoring to other target VMs, finding a new master monitor node, etc.[0397] At block 1504, monitor nodes designated to be in quorum 440 execute master node selection logic 606, designating an initial or new master monitor node, worker nodes(s), and observer node(s). See also FIG. 16. [401-407] elaborate on the matter [FIG.1E & 16] show corresponding visual) wherein the second target data is the same as first target data stored in the first data node. (San [181] a mirror copy, for instance, where changes made to primary data 112 are mirrored or substantially immediately copied to another location (e.g., to secondary storage device(s) 108). 
[189] copy may include primary data 112 or a secondary copy 116 that exceeds a given size threshold or a given age threshold. Often, and unlike some types of archive copies [193] Auxiliary copies provide additional standby copies of data and may reside on different secondary storage devices 108 than the initial secondary copies 116. Thus, auxiliary copies can be used for recovery purposes if initial secondary copies 116 become unavailable. Exemplary auxiliary copy techniques are described in further [210-218] elaborate on the matter [FIG.1C &E] shows corresponding visual) The combination does not explicitly teach receiving second target data feedback from the second data node in response to the target data read request in a case where the first data node is unavailable. However, Dimou teaches receiving second target data feedback from the second data node in response to the target data read request in a case where the first data node is unavailable (Dimou [0119] As shown, the apparatus 1302 may include a variety of components configured for various functions. In one configuration, the apparatus 1302, and in particular the cellular baseband processor 1304, includes means for receiving, from a base station, DCI including a request for HARQ-ACK feedback associated with at least one resource that is unavailable, the HARQ-ACK feedback including one or more one-shot HARQ-ACK codebooks including one or more bits; and means for transmitting, to the base station, the HARQ-ACK feedback associated with the at least one resource that is unavailable, the HARQ-ACK feedback including the one or more one-shot HARQ-ACK codebooks including the one or more bits. The apparatus 1302 further includes means for receiving, from the base station, an indication of the switch of the slot format from the first slot format to the second slot format. The apparatus 1302 further includes means for detecting that the at least one resource for the HARQ-ACK feedback is unavailable. 
[0123] The reception component 1430 is configured, e.g., as described in connection with 1104 and 1218, to receive, from the UE, the HARQ-ACK feedback associated with the at least one resource that is unavailable—the HARQ-ACK feedback includes the one or more one-shot HARQ-ACK codebooks including the one or more bits. The transmission component 1434 is configured, e.g., as described in connection with 1102, 1206, and 1208, to transmit, to a UE, DCI including a request for HARQ-ACK feedback associated with the at least one resource that is unavailable—the HARQ-ACK feedback is based on one or more one-shot HARQ-ACK codebooks including one or more bits; and to transmit, to a UE, an indication of the switch of the slot format from the first slot format to the second slot format. [139-142] elaborate on the matter) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Ellis and San with the teachings of Dimou in order to facilitate efficient communication between devices (Dimou [AB.] This disclosure provides systems, devices, apparatus, and methods, including computer programs encoded on storage media, for DCI content and modified/enhanced codebook Type 3 HARQ PUCCH content for deferred SPS PUCCH ACK/NACK. In aspects, a UE may receive, from a base station, DCI including a request for HARQ-ACK feedback associated with at least one resource that is unavailable. [0065] facilitate communications with wireless devices are adopted in various telecommunication standards. For example, communication methods associated with eMBB, mMTC, and URLLC may be incorporated in the 5G NR telecommunication standard, while other aspects may be incorporated in the 4G LTE standard. 
As mobile broadband technologies are part of a continuous evolution, further improvements in mobile broadband remain useful to continue the progression of such technologies.[0080] In order to reduce layer 1 (L1) signaling overhead, the enhanced Type 3 codebook may correspond to a number N of SPS HARQ IDs for SPS HARQ occasions. The number N may be indicative of the codebook size. The base station may provide an explicit indication in the DCI 604 for requesting the first N SPS PUCCH HARQ-ACK. The base station may also indicate a location of the PUCCH resources. Since the transmission may be limited to SPS PUCCH HARQ-ACK, a reduction in payload size may be provided in comparison to other types of codebook transmissions. The first N SPS HARQ IDs after a time instance to 610 may be indicated by the DCI 604 for requesting feedback based on the enhanced Type 3 codebook. Feedback for the number N of SPS HARQ occasions may be requested to ensure that the UE and the base station have a same understanding of the codebook size.)
Allowable Subject Matter
Claim 8 contains allowable subject matter; however, the claim is objected to as being dependent upon rejected claims 1, 6, and 7 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARYAN D TOUGHIRY whose telephone number is (571)272-5212. The examiner can normally be reached Monday - Friday, 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARYAN D TOUGHIRY/Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165