DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 5 is objected to because of the following informalities: the limitation “wherein at least a portion of the input data from a news outlet” appears to be missing the word “comes” before “from a news outlet,” as recited in the similar limitation of claim 11. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the directed computational graph service module" in “receive batch processing data from the directed computational graph service module” and “receive real-time processing data from the directed computational graph service module.” There is insufficient antecedent basis for this limitation in the claim as the feature “the directed computational graph service module” is not explicitly mentioned or described earlier in the claim.
Claim 1 further recites the limitation "the pipeline service" in “wherein the pipeline service moves graph-based data among a plurality of instantiated workers.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “the pipeline service” is referring to “a data pipeline service” as mentioned earlier in claim 1 or a different entity. For examination purposes, either interpretation will be considered.
Claim 1 further recites the limitation "the directed compute graph" in “wherein the nodes, edges, and instantiated workers of the directed compute graph represent state information for processing of the input data.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “the directed compute graph” is referring to “a distributed compute graph” as mentioned earlier in claim 1, “a directed graph” as mentioned earlier in claim 1, or a different entity. For examination purposes, either interpretation will be considered.
Claim 1 further recites the limitation "a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof" in “a general transformer service module comprising a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof” and “a decomposable transformer service module comprising a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof” is referring to “a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof” from the “decomposable transformer service module” as mentioned earlier in claim 1, “a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof” from the “general transformer service module” as mentioned earlier in claim 1, “a memory, a processor…” from the “computing device” as mentioned earlier in claim 1, or a different entity. For examination purposes, any of the above interpretations will be considered.
Claims 2-3 and 5-6 are dependent on claim 1 and for examination purposes are therefore rejected using the same rationale set forth above in the rejection of claim 1.
Claim 4 recites the limitation "the data input into the system" in “wherein at least a portion of the data input into the system comes from actions of a user while using an application.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “the data input into the system” is referring to “the input data” of claim 1 or a different entity. For examination purposes, either interpretation will be considered.
Claim 7 recites the limitation "the directed computational graph service module" in “receive batch processing data from the directed computational graph service module” and “receive real-time processing data from the directed computational graph service module.” There is insufficient antecedent basis for this limitation in the claim as the feature “the directed computational graph service module” is not explicitly mentioned or described earlier in the claim.
Claim 7 further recites the limitation "the pipeline service" in “wherein the pipeline service moves graph-based data among a plurality of instantiated workers.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “the pipeline service” is referring to “a data pipeline service” as mentioned earlier in claim 7 or a different entity. For examination purposes, either interpretation will be considered.
Claim 7 further recites the limitation "the directed compute graph" in “wherein the nodes, edges, and instantiated workers of the directed compute graph represent state information for processing of the input data.” There is insufficient antecedent basis for this limitation in the claim as it is unclear whether “the directed compute graph” is referring to “a distributed compute graph” as mentioned earlier in claim 7, “a directed graph” as mentioned earlier in claim 7, or a different entity. For examination purposes, either interpretation will be considered.
Claims 8-12 are dependent on claim 7 and for examination purposes are therefore rejected using the same rationale set forth above in the rejection of claim 7.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Pueyo et al. (U.S. Patent Application Publication No. US 2011/0154341 A1), hereinafter “Pueyo,” in view of Bartlett et al. (U.S. Patent Application Publication No. US 2016/0078532 A1), hereinafter “Bartlett,” and Bose et al. (U.S. Patent Application Publication No. US 2016/0196527 A1), hereinafter “Bose.”
With regards to Claim 1, Pueyo teaches:
A system for multitemporal data analysis, comprising:
a computing device comprising a memory, a processor, and a non-volatile data storage device (Paragraphs 17-18 and 23, “With reference to FIG. 1, an exemplary system for implementing the invention may include a general purpose computer system 100. Components of the computer system 100 may include, but are not limited to, a CPU or central processing unit 102, a system memory 104, and a system bus 120 that couples various system components including the system memory 104 to the processing unit 102… The computer system 100 may include a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer system 100 and includes both volatile and nonvolatile media. For example, computer-readable media may include volatile and nonvolatile computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data... The task manager library may also include a configurator that extracts data and parameters of the map-reduce application from a configuration file to configure the map-reduce application for execution, a scheduler that determines an execution plan based on input and output data dependencies of mappers and reducers, a launcher that iteratively launches the mappers and reducers according to the execution plan, and a task executor that requests the map-reduce library to invoke execution of mappers and reducers.” Extracting data and parameters from a map-reduce application and determining an execution plan based on dependencies correlates to a system for multitemporal data analysis. The system including a system memory, CPUs and nonvolatile computer storage media correlates to a computing device comprising memory, a processor, and a nonvolatile data storage device);
a general transformer service module comprising a memory, a processor, and a plurality of programming instructions stored in the memory thereof and operable on the processor thereof (Paragraphs 29-30, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application… The mapper server 226 and the reducer server 230 may each be a computer such as computer system 100 of FIG. 1. The mapper server 202 may include a mapper 228 that has functionality for processing a part of the input data distributed across mapper servers 202 and sending partial results from processing to a reducer server 230 for integration to produce final results for output… Each of these components may alternatively be a processing device such as an integrated circuit or logic circuitry that executes instructions represented as microcode, firmware, program code or other executable instructions that may be stored on a computer-readable storage medium. Those skilled in the art will appreciate that these components may also be implemented within a system-on-a-chip architecture including memory, external interfaces and an operating system.” The mapper servers comprising mappers with executable code correlates to a general transformer service module comprising a plurality of programming instructions stored in the memory. The mapper servers being a processing device and also implemented within a system-on-a-chip architecture including memory correlates to a general transformer service module comprising a memory and processor), wherein the programmable instructions, when operating on the processor, cause the processor to:
receive batch processing data from the directed computational graph service module (Paragraph 29, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application.” The map-reduce application loading input data distributed across multiple mapper servers to produce partial results correlates to receiving batch processing data from the directed computational graph service module); and
perform batch processing of the batch processing data according to a pre-determined first data processing workflow (Paragraphs 29-31, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application… The mapper server 202 may include a mapper 228 that has functionality for processing a part of the input data distributed across mapper servers 202 and sending partial results from processing to a reducer server 230 for integration to produce final results for output… Multiple tasks can be specified in a configuration file, and the task management library will execute them all, one after the other, allowing for the usage of the results of one task as input for the next one. Additionally, a task can be specified to be executed concurrently with other tasks in the configuration file, where the data the task uses does not depend on any task which has not yet finished execution. In order for the task manager library to manage chaining and parallelizing execution of tasks of a map-reduce application in a map-reduce framework, tasks and parameters of the map-reduce application need to be specified in the configuration file.” The mappers processing a part of the input data to produce partial results based on the order specified in a configuration file correlates to performing batch processing of the batch processing data according to a pre-determined first data processing workflow); and
a decomposable transformer service module comprising a memory, a processor, and a plurality of programming instructions stored in the memory thereof (Paragraphs 29-30, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application… The mapper server 226 and the reducer server 230 may each be a computer such as computer system 100 of FIG. 1. The reducer server 230 may include a reducer 232 that has functionality for receiving partial results of processing parts of the input data from one or more mappers 228, and outputting final results of data processing by the map-reduce application. Each mapper and each reducer may be any type of executable software code, including a kernel component, an application program, a linked library, an object with methods, or other type of executable software code… Each of these components may alternatively be a processing device such as an integrated circuit or logic circuitry that executes instructions represented as microcode, firmware, program code or other executable instructions that may be stored on a computer-readable storage medium. Those skilled in the art will appreciate that these components may also be implemented within a system-on-a-chip architecture including memory, external interfaces and an operating system.” The reducer servers comprising reducers with executable code correlates to a decomposable transformer service module comprising a plurality of programming instructions stored in the memory. 
The reducer servers being a processing device and also implemented within a system-on-a-chip architecture including memory correlates to a decomposable transformer service module comprising a memory and processor), and operable on the processor thereof, wherein the programmable instructions, when operating on the processor, cause the processor to:
receive real-time processing data from the directed computational graph service module (Paragraph 29, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application.” The reducer servers receiving the partial results of data processed by the mappers, which originates from input data across mapper servers, correlates to receiving real-time processing data from the directed computational graph service module); and
perform real-time processing of the real-time processing data according to a pre-determined second data processing workflow (Paragraphs 29 and 31, “In general, a map-reduce application may have a map stage, where part of the input data distributed across mapper servers may be loaded and processed by executable code of a mapper to produce partial results, and a reduce stage, where one or more reducer servers receive and integrate the partial results of data distributed and processed by executable code of mappers to produce final results of data processing by the map-reduce application... Multiple tasks can be specified in a configuration file, and the task management library will execute them all, one after the other, allowing for the usage of the results of one task as input for the next one. Additionally, a task can be specified to be executed concurrently with other tasks in the configuration file, where the data the task uses does not depend on any task which has not yet finished execution. In order for the task manager library to manage chaining and parallelizing execution of tasks of a map-reduce application in a map-reduce framework, tasks and parameters of the map-reduce application need to be specified in the configuration file. For instance, mapper, reducer and wrapper executable code referenced by their qualified name may be specified in the configuration file. A set of pathnames of input files or folder can be specified for input data of a single task.” The reducer servers integrating the partial results of data processed by the mappers to produce final results of data processing correlates to performing real-time processing of the real-time processing data. Tasks specified to be executed concurrently with other tasks and chained in succession based on a configuration file referencing mappers and reducers separately correlates to the real-time processing being done according to a pre-determined second data processing workflow).
Pueyo does not explicitly teach:
a graph stack service module comprising a first plurality of programming instructions stored in the memory and operable on the processor, wherein the first plurality of programming instructions, when operating on the processor, causes the computing device to:
convert an input data stream into a directed graph;
send the directed graph to a distributed compute graph comprising a data pipeline service and configured to perform computations based on the structure and content of the directed graph;
wherein the pipeline service moves graph-based data among a plurality of instantiated workers, wherein the nodes, edges, and instantiated workers of the directed compute graph represent state information for processing of the input data;
However, Bartlett teaches:
a graph stack service module comprising a first plurality of programming instructions stored in the memory and operable on the processor (Paragraphs 17 and 19, “MAG engine 2 may include a MAG memory 6 and MAG baseline 20. MAG engine 2 may also include a processor to execute one or more modules, including execute object module 8, prepare response module 10, trade parser module 12, incoming trade module 14, and look-up engine 16. MAG engine 2 may also include optimizing compiler 24, objects 26, and kernel library 28… MAG memory 6 may, in some cases, further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.” The MAG engine including a MAG memory and processor to execute one or more modules correlates to a graph stack service module comprising a first plurality of programming instructions stored in the memory and operable on the processor), wherein the first plurality of programming instructions, when operating on the processor, causes the computing device to:
send the directed graph to a distributed compute graph comprising a data pipeline service and configured to perform computations based on the structure and content of the directed graph (Paragraphs 22 and 27, “In certain examples, MAG engine 2 may operate on two types of graphs. The top-level graph is called a hierarchy graph, where a node may, for instances, represent a type of a financial contract, and an edge indicates the existence of some relationship between contracts. There may be one hierarchy graph for each counterparty. From the hierarchy graph, along with information on statistical measures that a client specifies to compute, a directed graph called the computation graph may be derived. There may be one computation graph for each counterparty. The computation graph may be mostly a tree, in which case it is very much like an expression tree. In a computation graph, nodes represent computations and edges represent data dependence between computations. A node may include a computation kernel and its internal data called states. States are typically vectors or dense matrices called sheets. A sheet may comprise a two-dimension data structure organized by scenarios and time points. In one example, sheets may be in memory sequentially along the scenario dimension. There are two types of nodes in a computation graph, consolidation nodes and transformation nodes. Both types of nodes may produce a new result, while only consolidation nodes may modify its own states… Each opcode may be implemented as a computation kernel with a clearly defined set of input and output. As a basic computing block of a trace evaluation, each computation kernel is individually tuned. Opcodes are composed into a sequence to express the computation involved in a trade evaluation. The implementation of an opcode sequence, which may be naturally composed of kernel calls, may be referred to as a pipeline kernel. 
Under this framework, each node of a computation graph may be an opcode and the computation involved in evaluating a trade can be naturally expressed by a pipeline kernel consisting of computation kernels involved in a post-order traversal of the computation graph.” The hierarchy graph being used to derive a directed computation graph correlates to sending the directed graph to a distributed compute graph. The computation graph including nodes and computation kernels which implement a pipeline kernel correlates to a distributed compute graph comprising a data pipeline service. The compute nodes which compute real-time information based on edges that represent data dependence between computations correlates to a distributed compute graph configured to perform computations based on the structure and content of the directed graph);
wherein the pipeline service moves graph-based data among a plurality of instantiated workers, wherein the nodes, edges, and instantiated workers of the directed compute graph represent state information for processing of the input data (Paragraphs 22, 33, 54, and 88, “In a computation graph, nodes represent computations and edges represent data dependence between computations. A node may include a computation kernel and its internal data called states. States are typically vectors or dense matrices called sheets. A sheet may comprise a two-dimension data structure organized by scenarios and time points. In one example, sheets may be in memory sequentially along the scenario dimension. There are two types of nodes in a computation graph, consolidation nodes and transformation nodes. Both types of nodes may produce a new result, while only consolidation nodes may modify its own states… Techniques of this disclosure may also exploit both single-instruction multiple data (SIMD) parallelism and thread-level parallelism… In some examples, the one or more static computation nodes of the computation graph each contain static information. In some examples, the one or more dynamic computation nodes of the computation graph each comprise dynamic information. The instructions may cause processors 30 to, before receiving the real-time trade, determine a pipeline kernel in the computation graph.
The pipeline kernel may comprise at least one of the one or more static computation nodes, at least one of the one or more dynamic computation nodes, and a path originating from one of the one or more static computation nodes or one of the one or more dynamic computation nodes along at least one of the one or more computation edges… For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may be executed in a different order, or the functions in different blocks may be processed in different but parallel processing threads, depending upon the functionality involved.” The computation nodes including states and edges representing data dependence between computations in a computation graph correlates to the nodes and edges of the directed compute graph representing state information for processing of the input data. The pipeline kernel comprising static and dynamic computation nodes containing information and connected by at least one computation edge correlates to the pipeline service moving graph-based data. The system utilizing parallel processing threads to process specific functions of the computation nodes correlates to the pipeline service moving graph-based data among a plurality of instantiated workers and instantiated workers of the directed compute graph representing state information for processing of the input data);
Additionally, Bose teaches:
convert an input data stream into a directed graph (Fig. 15, paragraphs 138-139, “A “virtual sensor” as used herein refers to a modular software component performing specialized signal transformations and inferences (assessments) based on a) internal probabilistic models and b) input data streams. A set of virtual sensors may be dynamically programmed by models produced by learning pipelines and wired together to form a directed acyclic graph of their data dependencies. The virtual sensors may be executed in a distributed and parallel environment. FIG. 15 illustrates a portion of the virtual sensor network, following the path of raw sensor data streams and how they are transformed for real-time predictions. The virtual sensor network is organized into layers so that data is processed and aggregated into increasingly higher levels of abstraction. The path of the raw data stream begins with physical sensors 1502, 1504, and 1506. The virtual sensors within the lowest physical sensor level are a collection of preprocessors that include virtual sensors 1508, 1510, and 1512. Example preprocessors at this level may include, without limitation, sensor failure reconstruction, noise reduction, normalization and/or feature extraction. Following the preprocessors, the data streams are fused by component that include virtual sensors 1514, 1516, 1518, and 1520. In the present example, virtual sensors 1514 is the component state tracker based on particle filter-based algorithms and virtual sensor 1516 is an RSL estimator. Virtual sensor 1518 is an emission predictor and virtual sensor 1520 is an emission alert generator.” The virtual sensors performing specialized signal transformations and inferences based on raw input data streams to form directed acyclic graphs correlates to converting an input data stream into a directed graph);
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Pueyo with converting an input data stream into a directed graph as taught by Bose, because virtual sensors provide scalable condition monitoring and assessments which are distributed in-vehicle and in-cloud to perform specialized signal transformations and inferences based on input data streams. The virtual sensors can be dynamically programmed by models produced by learning pipelines to form directed acyclic graphs in a distributed and parallel environment (Bose: paragraph 138).
Additionally, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Pueyo with a graph stack service module comprising a first plurality of programming instructions stored in the memory and operable on the processor, wherein the first plurality of programming instructions, when operating on the processor, causes the computing device to: send the directed graph to a distributed compute graph comprising a data pipeline service and configured to perform computations based on the structure and content of the directed graph; wherein the pipeline service moves graph-based data among a plurality of instantiated workers, wherein the nodes, edges, and instantiated workers of the directed compute graph represent state information for processing of the input data as taught by Bartlett because reducing the amount of computation by implementing computational kernels in the directed graph, improving core efficiency, and using thread parallelism improve the latency and throughput of a software system (Bartlett: paragraph 31).
With regards to Claim 7, the system of Claim 1 performs the same steps as the method of Claim 7, and Claim 7 is therefore rejected using the same rationale set forth above in the rejection of Claim 1.
With regards to Claim 2, Pueyo in view of Bartlett and Bose teaches the system of Claim 1 above. Pueyo further teaches:
wherein a function is executed based at least in part by the results of data processing by the general transformer service module and decomposable transformer service module (Paragraphs 30-31, “The mapper server 202 may include a mapper 228 that has functionality for processing a part of the input data distributed across mapper servers 202 and sending partial results from processing to a reducer server 230 for integration to produce final results for output. The reducer server 230 may include a reducer 232 that has functionality for receiving partial results of processing parts of the input data from one or more mappers 228, and outputting final results of data processing by the map-reduce application… Multiple tasks can be specified in a configuration file, and the task management library will execute them all, one after the other, allowing for the usage of the results of one task as input for the next one.” The mapper servers processing a part of the input data which is sent to the reducer server for further processing correlates to a function executed at least in part by results of the data processing by the general transformer service module. The reducer server outputting final results of data processing by the map-reduce application, which can be used as input for the next task, correlates to a function executed at least in part by results of the data processing by the decomposable transformer service module).
With regards to Claim 8, the system of Claim 2 performs the same steps as the method of Claim 8, and Claim 8 is therefore rejected using the same rationale set forth above in the rejection of Claim 2.
With regards to Claim 3, Pueyo in view of Bartlett and Bose teaches the system of Claim 1 above. Pueyo further teaches:
wherein at least a portion of the input data comes from a social media source (Paragraph 2, “Cloud computing involves many powerful technologies, including map-reduce applications, that allow large online companies to process vast amounts of data in a short period of time. Tasks such as analyzing traffic, extracting knowledge from social media properties or computing new features for a search index are complex by nature and recur on a regular basis. Map-reduce applications are often used to perform these tasks to process large quantities of data. A map-reduce application may be executed in a map-reduce framework of a distributed computer system where input data is divided and loaded for processing by several mappers, each executing on mapper servers, and partial results from processing by mappers are sent for integration to one or more reducers, each executing on reducer servers.” The map-reduce application extracting knowledge from social media properties to process large quantities of input data correlates to at least a portion of the input data coming from a social media source).
With regards to Claim 9, the system of Claim 3 performs the same steps as the method of Claim 9, and Claim 9 is therefore rejected using the same rationale set forth above in the rejection of Claim 3.
With regards to Claim 4, Pueyo in view of Bartlett and Bose teaches the system of Claim 1 above. Pueyo further teaches:
wherein at least a portion of the data input into the system comes from actions of a user while using an application (Paragraphs 21 and 31, “A user may enter commands and information into the computer system 100 through an input device 140 such as a keyboard and pointing device, commonly referred to as mouse, trackball or touch pad tablet, electronic digitizer, or a microphone. Other input devices may include a joystick, game pad, satellite dish, scanner, and so forth. These and other input devices are often connected to CPU 102 through an input interface 130 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB)… Users rarely run a single task in a map-reduce application for a data processing project and need to chain data processes, transforming the data, retrieving results and reusing obtained results. Multiple tasks can be specified in a configuration file, and the task management library will execute them all, one after the other, allowing for the usage of the results of one task as input for the next one.” The users entering commands and information such as a configuration file into the computer system through an input device correlates to at least a portion of the data input into the system coming from actions of a user while using an application).
With regards to Claim 10, the system of Claim 4 performs the same steps as the method of Claim 10, and Claim 10 is therefore rejected using the same rationale set forth above in the rejection of Claim 4.
Claim(s) 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Pueyo in view of Bartlett, Bose and Bishop et al. (U.S. Patent Application Publication No. US 20170083380 A1), hereinafter “Bishop.”
With regards to Claim 5, Pueyo in view of Bartlett and Bose teaches the system of Claim 1 above. Pueyo in view of Bartlett and Bose does not explicitly teach:
wherein at least a portion of the input data from a news outlet.
However, Bishop teaches:
wherein at least a portion of the input data from a news outlet (Paragraph 136, “Data sources 102 are entities such as a smart phone, a WiFi access point, a sensor or sensor network, a mobile application, a web client, a log from a server, a social media site, etc. In one implementation, data from data sources 102 are accessed via an API (Application Programming Interface) that allows sensors, devices, gateways, proxies and other kinds of clients to register data sources 102 in the IoT platform 100 so that data can be ingested from them. Data from the data sources 102 can include events in the form of structured data (e.g. user profiles and the interest graph), unstructured text (e.g. tweets) and semi-structured interaction logs. Examples of events include device logs, clicks on links, impressions of recommendations, numbers of logins on a particular client, server logs, user's identities (sometimes referred to as user handles or user IDs and other times the users' actual names), content posted by a user to a respective feed on a social network service, social graph data, metadata including whether comments are posted in reply to a prior posting, events, news articles, and so forth.” The clients registering data sources which include data from news articles correlates to at least a portion of the input data coming from a news outlet).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Pueyo with wherein at least a portion of the input data from a news outlet as taught by Bishop because APIs allow clients to register a variety of data sources so that data can be ingested from them. These data sources can be structured, unstructured, or semi-structured and include events such as device logs, clicks on links, recommendations, number of logins, server logs, identities, and metadata (Bishop: paragraph 136).
With regards to Claim 11, the system of Claim 5 performs the same steps as the method of Claim 11, and Claim 11 is therefore rejected using the same rationale set forth above in the rejection of Claim 5.
Claim(s) 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pueyo in view of Bartlett, Bose and Sirota et al. (U.S. Patent No. US 8719415 B1), hereinafter “Sirota.”
With regards to Claim 6, Pueyo in view of Bartlett and Bose teaches the system of Claim 1 above. Pueyo in view of Bartlett and Bose does not explicitly teach:
wherein at least a portion of the input data comes from a distributed database.
However, Sirota teaches:
wherein at least a portion of the input data comes from a distributed database (Col. 3, lines 34-42, “As previously noted, a cluster for use in the distributed execution of a program may in at least some embodiments include multiple core computing nodes that participate in a distributed storage system for use by the cluster, such as to store input data used in the distributed program execution and/or output data generated by the distributed program execution. The distributed storage system may have various forms in various embodiments, such as a distributed file system, a distributed database, etc.” The input data being stored in a distributed storage system such as a distributed database correlates to a portion of the input data coming from a distributed database).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Pueyo with wherein at least a portion of the input data comes from a distributed database as taught by Sirota because distributed storage systems provide various mechanisms to enhance data availability, such as by storing multiple copies of some groups of data to enhance the likelihood that at least one copy remains available if a core computing node storing another copy of that data group fails or otherwise becomes unavailable (Sirota: Col. 3, lines 42-48).
With regards to Claim 12, the system of Claim 6 performs the same steps as the method of Claim 12, and Claim 12 is therefore rejected using the same rationale set forth above in the rejection of Claim 6.
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Amorim et al. (U.S. Patent Application Publication No. US 20120290576 A1), teaching a method of data analysis that uses a database service module with a data storage subsystem to collect data from multiple devices. The data is stored in a meta-structure using primitives to classify the data. An analysis engine analyzes the data in a specific order and frequency to determine whether the data meets certain criteria in accordance with a stored set of rules.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SELINA ELISA HU/Examiner, Art Unit 2193
/Chat C Do/Supervisory Patent Examiner, Art Unit 2193