DETAILED ACTION
This correspondence is responsive to the application and preliminary amendment filed on January 24, 2023. Claims 1-27 are pending in the case, with claims 1, 14 and 27 in independent form.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Summary of Detailed Action
Claims 1, 13 and 14 are objected to because of informalities.
The specification is objected to because the title of the invention is not descriptive.
Claims 4-5, 9, 11, 17-18, 22 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claim 27 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1, 7, 12, 14, 20, 25, 27 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi et al. in view of Jampani et al.
Claims 2-3, 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani, and further in view of Chilimbi et al.
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani, and further in view of Cilingir et al.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani, and further in view of Milletari et al.
Claims 10, 13, 23, 26 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani, and further in view of Itou et al.
Claims 5, 8-9, 11, 18, 21-22, 24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if all objections and rejections as being indefinite are overcome.
Claim Objections
Claims 1, 13 and 14 are objected to because of the following informalities:
Claims 1, 13, 14: Claims should not include bullet symbols. Delete all bullet symbols in claims 1, 13 and 14.
Claims 1, 14: Claims should not include hyphen symbols. Delete all hyphen symbols in claims 1 and 14.
Claims 1, 14: Claims should not include asterisk symbols. Delete all asterisk symbols in claims 1 and 14.
Claims 1, 13, 14: All limitations should end with a semicolon or colon, except for the last limitation that ends with a period. Insert semicolons to the ends of limitations in claims 1, 13 and 14.
As an example, a portion of claim 1 is shown below with semicolons and a colon added to the end of limitations:
1. (Currently Amended) A data processing device, comprising:
at least one first interface for receiving input data;
at least one second interface for outputting output data;
at least one shared memory device into which data can be written and from which data can be read;
at least one computing device to which the at least one first interface and the at least one second interface and the at least one shared memory device are connected, and which is configured to:
Claim 14, lines 17-18: delete the two periods “.” that appear in the middle of the limitation on lines 17 and 18.
“executes a machine learning method on the module-specific data segments, said machine learning method comprising data …”
Appropriate correction is required.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: SYSTEM AND METHOD FOR DATA PROCESSING USING SHARED MEMORY AND PARALLEL PROCESSING FOR DATA HUB AND COMPUTATION MODULES
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4 and 5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 5 depends from claim 4. Both claims 4 and 5 recite the phrase “keeps information which shared keyed data segments were segmented from the same input data.” The phrase “keeps information which shared keyed data segments were segmented from the same input data” is grammatically incorrect and unclear. For example, it is not clear what “keeps information which shared keyed data segments were segmented from the same input data” means, much less how to “keep information which shared keyed data segments were segmented from the same input data.” Therefore, the boundaries of claims 4 and 5 are indefinite. For examination purposes, claims 4 and 5 are interpreted as reciting the phrase “keeps information on which shared keyed data segments were segmented from the same input data.” Applicant may cancel claims 4 and 5 or amend claims 4 and 5 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 17 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 18 depends from claim 17. Both claims 17 and 18 recite the phrase “keeps information which shared keyed data segments were segmented from the same input data.” The phrase “keeps information which shared keyed data segments were segmented from the same input data” is grammatically incorrect and unclear. For example, it is not clear what “keeps information which shared keyed data segments were segmented from the same input data” means, much less how to “keep information which shared keyed data segments were segmented from the same input data.” Therefore, the boundaries of claims 17 and 18 are indefinite. For examination purposes, claims 17 and 18 are interpreted as reciting the phrase “keeps information on which shared keyed data segments were segmented from the same input data.” Applicant may cancel claims 17 and 18 or amend claims 17 and 18 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 9 and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 22 recites a method that parallels the device of claim 9. Claim 9 recites “create a sense of orientation in space and/or time by using non-commutating morphisms or functors,” and claim 22 recites “a sense of orientation in space and/or time is created.” It is not clear what “creating a sense of orientation in space and/or time” means, much less how to create a sense of orientation in space and/or time. For example, does creating a sense of orientation in space and/or time mean creating a representation, graphic, diagram, image, view, or graph of data, connections, objects or events in spatial or time orientations or domains? Or does creating a sense of orientation in space and/or time mean something else entirely? Applicant may cancel claims 9 and 22 or amend claims 9 and 22 to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 11 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 24 recites a method that parallels the device of claim 11. Claims 11 and 24 both recite the limitation “the random signal generator.” There is insufficient antecedent basis for this limitation in claims 11 and 24.
Claims 11 and 24 are further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 24 recites a method that parallels the device of claim 11. Claims 11 and 24 both recite the limitation “the universal quantifier of natural logic.” There is insufficient antecedent basis for this limitation in claims 11 and 24.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 27 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because claim 27 recites “a computer program which when the program is executed by a data processing device, causes the data processing device to be configured according to claim 1.” Claim 27 only recites a computer program and does not include any hardware or non-transitory computer readable media. Accordingly, the recited computer program is software per se and is not a “process,” a “machine,” a “manufacture,” or a “composition of matter,” as defined in 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 12, 14, 20, 25 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi et al. (Pub. No. US 2019/0244129 A1, published August 8, 2019) hereinafter Tabuchi in view of Jampani et al. (Pub. No. US 2020/0320401 A1, filed April 8, 2019) hereinafter Jampani. The Examiner notes that Tabuchi is cited on Applicant’s Information Disclosure Statement filed May 29, 2024.
Regarding claim 1, Tabuchi teaches:
A data processing device (i.e., The data orchestration platform 100 may be realized by software or hardware that automatically and dynamically monitors, controls, and manages devices, computer systems (A data processing device (a data processing computer system 300 device, see also Figure 3, para 57)), middleware, services, and other elements of the network communication environment 150. Here, the data orchestration platform 100 can implement the aspects of the present disclosure using methods such as IoT (Internet of Things) device management, AI data processing, machine learning, big data processing, and the like. Tabuchi, Figs 1-3, 9, para 45, 47, 23, 57, 65), comprising:
at least one first interface for receiving input data (i.e., The major components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (Input/Output) device interface 316 (at least one first interface for receiving input data), and a network interface 318, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 306, an I/O bus 308, bus interface unit 309, and an I/O bus interface unit 310. Tabuchi, Figs 1-3, 9 para 57, 70-72, 62, 57-65.)
at least one second interface for outputting output data (i.e., The major components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (Input/Output) device interface 316 (at least one second interface for outputting output data), and a network interface 318, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 306, an I/O bus 308, bus interface unit 309, and an I/O bus interface unit 310. Tabuchi, Figs 1-3, 9, para 57, 62, 57-65.)
at least one shared memory device into which data can be written and from which data can be read (i.e., The major components of the computer system 300 include one or more processors 302, a memory 304 (at least one memory device), a terminal interface 312, a storage interface 314, an I/O (Input/Output) device interface 316, and a network interface 318, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 306, an I/O bus 308, bus interface unit 309, and an I/O bus interface unit 310. Tabuchi, Figs 1-2, 3, para 57, 59, 63, 57-65. [0063] The storage interface 314 supports the attachment of one or more disk drives or direct access storage devices 322 (which are typically rotating magnetic disk drive storage devices, although they could alternatively be other storage devices, including arrays of disk drives configured to appear as a single large storage device to a host computer, or solid-state drives, such as flash memory). In some embodiments, the storage device 322 may be implemented via any type of secondary storage device (at least one memory device). The contents of the memory 304, or any portion thereof, may be stored to and retrieved from the storage device 322 as needed (data written into and read from the at least one memory device). The I/O device interface 316 provides an interface to any of various other I/O devices or devices of other types, such as printers or fax machines. The network interface 318 provides one or more communication paths from the computer system 300 to other digital devices and computer systems; these communication paths may include, for example, one or more networks 330. Tabuchi, Figs 1-2, 3, para 63, 57, 59, 63, 57-65.)
at least one computing device to which the at least one first interface and the at least one second interface and the at least one shared memory device are connected, and which is configured to (i.e., Turning now to the Figures, FIG. 1 is a conceptual diagram of a network communication environment 150 including a data orchestration platform 100, according to embodiments. The network communication environment 150 may include a data orchestration platform 100 and an information source group 120. [0045] The network communication environment 150 may be a network that facilitates data acquisition, communication, and connection between sensors, devices, buildings, automobiles, organisms, software applications, and other entities that utilize the data orchestration platform (at least one computing device to which the at least one first interface and the at least one second interface and the at least one shared memory device are connected (information source devices connected to the computer system first and second I/O interfaces and memory)). For example, as illustrated in FIG. 1, the network communication environment 150 may include a group of information sources 120 having a plurality of information sources. For example, the information source group 120 may refer to a collection of information sources such as devices, organisms, locations, software, or the like where data or information is generated in a network communication environment. As an example, as illustrated in FIG. 1, the information source group 120 may include factory production management systems, sensors for monitoring traffic flow rates, social networking service (SNS) platforms, external artificial intelligence (AI) databases, sensors for monitoring human biometric data, or various other equipment or systems. Tabuchi, Figs 1-4, 9, para 44-45, 57-59, 62, 57-65.)
Thus, Tabuchi teaches the at least one memory device. Tabuchi does not specifically disclose executing in parallel a plurality of processes and at least one shared memory.
However, Jampani teaches in the field related to systems and methods to detect one or more segments of one or more objects within one or more images based, at least in part, on a neural network trained in an unsupervised manner to infer the one or more segments. Jampani, Abstract. Jampani, which is analogous to the claimed invention because it is directed to computer systems, neural networks, processing segments, parallel processing and shared memory, teaches that, FIG. 6 illustrates a parallel processing unit (“PPU”) 600, in accordance with one embodiment. Jampani, Fig 6, 7-10, para 46, 47-48, 58, 54, 74, 84, 94. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using the at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of processes and exchanges of data.
- receive input data from the at least one first interface (i.e., At block 420, a set of raw data may be ingested. The set of raw data may be ingested from a set of information sources. Generally, ingesting can include detecting, analyzing, sensing, receiving, collecting, gathering, transforming, importing, or otherwise capturing the set of raw data from the set of information sources. The set of information sources may include devices, people, locations, software, or other points from which data related to the network communication environment is produced. As examples, the set of information sources may include a manufacturing execution system (MES) deployed in a factory environment, a programmable logic controller (PLC) of a server, a human user input, a heart monitor, a camera, a solar panel, a vehicle, or the like. In embodiments, ingesting may include using a plurality of data orchestration devices (e.g., cameras, microphones, thermal cameras, motion sensors, thermometers, photodetectors, barometers, hydrometers, capacitance sensors, accelerometers, and other sensors) to aggregate (e.g., collect, capture) the set of raw data from the network communication environment (e.g., home environment, health care facility, factory, office building, road/highway), and transmit it to the data orchestration platform (receive input data from the at least one first interface (receive information sources input data from the at least one first input I/O interface of the data orchestration computer system 300 that receives information sources input data transmitted to and received by the data orchestration platform computer system 300 I/O interfaces)). Tabuchi, Figs 1-4, 9, para 70, 57, 62, 57-65.)
- send output data to the at least one second interface (i.e., The I/O interface units support communication with a variety of storage and I/O devices. For example, the terminal interface unit 312 supports the attachment of one or more user I/O devices 320, which may include user output devices (such as a video display device, speaker, and/or television set) and user input devices (such as a keyboard, mouse, keypad, touchpad, trackball, buttons, light pen, or other pointing device). A user may manipulate the user input devices using a user interface in order to provide input data and commands to the user I/O device 320 and the computer system 300, and may receive output data via the user output devices. For example, a user interface may be presented via the user I/O device 320, such as displayed on a display device, played via a speaker, or printed via a printer (send output data to the at least one second interface (send output data to the at least one second I/O interface display, speaker, printer). Tabuchi, Figs 1-4, 9, para 62, 57-65.)
- read data from and write data into the at least one shared memory device, wherein the at least one computing device is configured to execute in parallel a plurality of processes, said plurality of processes comprising:
Tabuchi teaches that, FIG. 3 depicts a high-level block diagram of a computer system 300 for implementing various embodiments of the present disclosure, according to embodiments. The mechanisms and apparatus of the various embodiments disclosed herein apply equally to any appropriate computing system. The major components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (Input/Output) device interface 316, and a network interface 318, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 306 (read data from and write data into the at least one memory device), an I/O bus 308, bus interface unit 309, and an I/O bus interface unit 310. [0058] The computer system 300 may contain one or more general-purpose programmable central processing units (CPUs) 302A and 302B, herein generically referred to as the processor 302. In embodiments, the computer system 300 may contain multiple processors; …. Each processor 302 executes instructions stored in the memory 304 and may include one or more levels of on-board cache. Tabuchi, Figs 1-4, 9, para 57-58, 59, 63-64, 57-65. [0064] Although the computer system 300 shown in FIG. 3 illustrates a particular bus structure providing a direct communication path among the processors 302, the memory 304, the bus interface 309, the display system 324, and the I/O bus interface unit 310, in alternative embodiments the computer system 300 may include different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration (wherein the at least one computing device is configured to execute a plurality of processes). Furthermore, while the I/O bus interface unit 310 and the I/O bus 308 are shown as single respective units, the computer system 300 may, in fact, contain multiple I/O bus interface units 310 and/or multiple I/O buses 308. While multiple I/O interface units are shown, which separate the I/O bus 308 from various communications paths running to the various I/O devices, in other embodiments, some or all of the I/O devices are connected directly to one or more system I/O buses. Tabuchi, Figs 1-4, 9, para 64, 57-59, 57-65.
Thus, Tabuchi teaches that the at least one computing device includes a plurality of processors and at least one memory device. Tabuchi does not specifically disclose executing in parallel a plurality of processes and at least one shared memory.
However, Jampani teaches in the field related to systems and methods to detect one or more segments of one or more objects within one or more images based, at least in part, on a neural network trained in an unsupervised manner to infer the one or more segments. Jampani, Abstract. Jampani, which is analogous to the claimed invention because it is directed to computer systems, neural networks, processing segments, parallel processing and shared memory, teaches that, [0046] FIG. 6 illustrates a parallel processing unit (“PPU”) 600, in accordance with one embodiment. In an embodiment, the PPU 600 is configured with machine-readable code that, if executed by the PPU, causes the PPU to perform some or all of processes and techniques described throughout this disclosure. In an embodiment, the PPU 600 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel (execute in parallel a plurality of processes). In an embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by the PPU 600. Jampani, Fig 6, 7-10, para 46, 47-48, 58, 54, 74, 84, 94. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel (execute in parallel a plurality of processes). In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
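As an aid in reading the combination articulated above (a plurality of processes executing in parallel and exchanging data through at least one shared memory), the following minimal Python sketch is offered for illustration only. The worker function, slot layout, and values are hypothetical and are not taken from Tabuchi, Jampani, or the claims.

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def worker(shm_name: str, index: int) -> None:
    # Each process attaches to the same shared memory block by name
    # and writes its result into its own slot.
    shm = SharedMemory(name=shm_name)
    shm.buf[index] = index * 2  # toy "computation"
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=8)  # at least one shared memory
    procs = [Process(target=worker, args=(shm.name, i)) for i in range(8)]
    for p in procs:
        p.start()  # execute in parallel a plurality of processes
    for p in procs:
        p.join()
    print(list(shm.buf))  # data read back out: [0, 2, 4, 6, 8, 10, 12, 14]
    shm.close()
    shm.unlink()

The parent process creates the shared block, the workers write into it concurrently, and the parent reads the results back out, which is the shared-memory data exchange the rejection relies on Jampani to supply.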
- at least one data hub process receiving input data from the at least one first interface and/or the at least one shared memory device, comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments, wherein the at least one data hub process stores the keyed data segments in the at least one shared memory device
Tabuchi teaches that, Aspects of FIG. 9 relate to a system architecture 900 for implementing various aspects of the data orchestration platform described herein. In embodiments, as described herein, the data orchestration platform may be communicatively connected to a network 905 (e.g., a network communication environment, Internet of Things network) including a set of information sources (e.g., sensors, users, devices). In certain embodiments, the system architecture 900 may be configured, managed, and structured using a management device 990 (e.g., computer, server, terminal, mobile device). The system architecture 900 may include an orchestration hub 920 configured to ingest data (e.g., set of raw data) from the information sources of the network 905. The orchestration hub 920 may be a software module or hardware component configured to monitor, collect, organize, and manage the data ingested from the network 905 (at least one data hub process receiving input data from the at least one first interface and/or the at least one memory device). In embodiments, as described herein, the orchestration hub 920 may be configured to map the raw data with a set of device attribute data and a set of connection data (e.g., using a set of information source profiles) to facilitate interpretation of the set of raw data (comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments (comprising at least one shared attribute keying, mapping, tagging sub-process which provides mappings, keys, tags to data segments of the input data creating mapped, keyed, tagged interpreted data segments, see also Figure 4, para 75, 76-78)). … In certain embodiments, the set of raw data may be transmitted directly to an orchestration database 980 (e.g., an AI-based storage system) for storage and categorization. As illustrated in FIG. 9, in certain embodiments, the set of raw data may be processed using a data interpretation dictionary 975 (e.g., lexical resource configured to extract meaning from the set of raw data) to generate a set of interpreted data. In embodiments, generating the set of interpreted data may include utilizing a set of acquisition status data 976 (e.g., data characterizing the context in which the set of raw data was ingested) and a set of re-optimization data (e.g., data defining how past data was optimized and interpreted). Subsequently, the set of interpreted data may undergo data normalization 960 to be generalized and formatted. As described herein, the set of interpreted data may be returned to the orchestration hub 920 to provide feedback for future data analysis, transmitted to the orchestration processing unit 950 for further processing (e.g., determination of a management action), or stored in the orchestration database 980 (wherein the at least one data hub process stores the keyed data segments in the at least one memory device). Other types of system architecture 900 are also possible. Tabuchi, Figs 1-4, 9, para 105, 75-78, 20, 60.
As an example, consider a situation in which a set of raw data including a value of “7.6” is collected by a sensor in a zoo aquarium. The set of raw data may be analyzed using the data interpretation dictionary, and a set of interpreted data may be generated that indicates that the value of “7.6” indicates pH data for the water in the zoo aquarium. Accordingly, an attribute of “Measurement Unit-pH” may be attached as metadata to the set of raw data, and the set of raw data and the set of metadata may be bundled together to generate the set of interpreted data (comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments (comprising at least one shared attribute keying, tagging sub-process which provides keys, tags to data segments of the input data creating keyed, tagged interpreted data segments, see also Figure 4, para 75, 76, 77-78)). Other methods of generating the set of interpreted data are also possible. Tabuchi, Figs 1-4, 9, para 76, 77, 75-78, 102, 105. In embodiments, at block 462, the set of interpreted data may be stored in an AI-based data storage system (wherein the at least one data hub process stores the keyed data segments in the at least one memory device). The set of interpreted data may be stored in the AI-based data storage system based on the set of attributes. Generally, storing can include saving, recording, collecting, aggregating, caching, or otherwise maintaining the set of interpreted data in the AI-based data storage system. The AI-based data storage system may include a database management system (DBMS), data repository, cloud storage, or other data maintenance method configured to use AI tools to facilitate recording, searching, and retrieving of stored data. In embodiments, storing the set of interpreted data may include using a machine learning technique to sort sets of interpreted data and group them according to their attributes (e.g., data type, semantic factor, time stamp, unit of measurement, confidence value, severity level). The sorted interpreted data may then be stored in the data storage system in association with the attributes to which they correspond. For example, sets of interpreted data associated with the same semantic factor (e.g., seismic activity anomaly detection) may be stored in the same partition of a database in association with a tag indicating the semantic factor to facilitate data retrieval (e.g., all data associated with a semantic factor of “seismic activity anomaly detection” may be easily searched for and returned). Other methods of storing the set of interpreted data in the AI-based data storage system are also possible (wherein the at least one data hub process stores the keyed data segments in the at least one memory device). Tabuchi, Figs 1-4, 9, para 77, 76, 75-78, 102, 105.
Aspects of the disclosure relate to storing, in an AI-based data storage system, the set of interpreted data in an output data type based on the set of attributes (wherein the at least one data hub process stores the keyed data segments in the at least one memory device). Tabuchi, Figs 1-4, 9, para 20, 60, 63, 77, 75-78, 105. The memory 304 may store all or a portion of the various programs, modules and data structures for processing data transfers as discussed herein. For instance, the memory 304 can store a data orchestration platform management application 350. In embodiments, the data orchestration platform management application 350 may include instructions or statements that execute on the processor 302 or instructions or statements that are interpreted by instructions or statements that execute on the processor 302 to carry out the functions as further described below. …. In embodiments, the data orchestration platform management application 350 may include data in addition to instructions or statements. Tabuchi, Figs 1-4, 9, para 60, 63, 20, 77, 75-78, 105.
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
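For illustration only, the claimed data hub with a keying sub-process, as mapped above to Tabuchi's orchestration hub attaching attributes such as "Measurement Unit-pH" to raw data, can be sketched in Python as follows. The key format and the shared store are hypothetical stand-ins, not the application's or Tabuchi's implementation.

from multiprocessing import Manager

def keying_subprocess(input_data, segment_size, store):
    """Split input data into segments and store each under a key."""
    for i in range(0, len(input_data), segment_size):
        segment = input_data[i:i + segment_size]
        key = f"segment:{i // segment_size}:pH"  # key encodes origin + attribute
        store[key] = segment                     # keyed segment into the shared store

if __name__ == "__main__":
    manager = Manager()
    shared_store = manager.dict()  # process-shared stand-in for the shared memory
    keying_subprocess([7.6, 7.4, 7.9, 8.1], segment_size=2, store=shared_store)
    print(dict(shared_store))  # {'segment:0:pH': [7.6, 7.4], 'segment:1:pH': [7.9, 8.1]}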
- a plurality of processes in the form of computation modules wherein each computation module is configured to:
(i.e., The data orchestration platform may leverage a data interpretation dictionary in tandem with a machine learning technique in order to interpret data regardless of the data source from which it was received. Subsequently, a machine learning model may govern selection of appropriate AI logic units to process the interpreted data based on the attributes with which the interpreted data is associated (a plurality of processes in the form of computation modules (AI logic units) wherein each computation module is configured to). Tabuchi, Figs 1-4, para 7, 18, 28, 78-79, 81-82, 84. As examples, the AI logic unit may include a natural language processing technique, image analysis technique, predictive analytics, statistical analysis, prescriptive analytics, market modeling, web analytics, security analytics, risk analytics, software analytics, and the like (a plurality of processes in the form of computation modules wherein each computation module is configured to). Tabuchi, Figs 1-4, para 78.)
* access the at least one shared memory device for module-specific data segments, which are shared keyed data segments that are keyed with at least one key which is specific for at least one of the computation modules
Tabuchi teaches that, In embodiments, at block 462, the set of interpreted data may be stored in an AI-based data storage system. The set of interpreted data may be stored in the AI-based data storage system based on the set of attributes. Generally, storing can include saving, recording, collecting, aggregating, caching, or otherwise maintaining the set of interpreted data in the AI-based data storage system. The AI-based data storage system may include a database management system (DBMS), data repository, cloud storage, or other data maintenance method configured to use AI tools to facilitate recording, searching, and retrieving of stored data. In embodiments, storing the set of interpreted data may include using a machine learning technique to sort sets of interpreted data and group them according to their attributes (e.g., data type, semantic factor, time stamp, unit of measurement, confidence value, severity level). The sorted interpreted data may then be stored in the data storage system in association with the attributes to which they correspond. For example, sets of interpreted data associated with the same semantic factor (e.g., seismic activity anomaly detection) may be stored in the same partition of a database in association with a tag indicating the semantic factor to facilitate data retrieval (e.g., all data associated with a semantic factor of “seismic activity anomaly detection” may be easily searched for and returned) (access the at least one memory device for module-specific data segments, which are shared keyed data segments that are keyed with at least one key which is specific for at least one of the computation modules (to look and search for and return AI logic unit module-specific interpreted data segments, which are shared attribute keyed and tagged data segments that are keyed, tagged, and associated with at least one attribute key tag which is specific (specific, appropriate, suitable AI logic units, see also para 78-79) for at least one of the AI logic unit computation modules)). Other methods of storing the set of interpreted data in the AI-based data storage system are also possible. Tabuchi, Figs 1-4, para 77, 78, 79. Aspects of the disclosure relate to the recognition that, in some situations, it may be desirable to select an appropriate AI logic unit to process a set of interpreted data based on the attributes of the data. Herein, an AI logic unit may refer to a module, application, routine, algorithm, script, or other AI-based technique configured to examine, discover, interpret, transform, or process data to derive meaning or perform tasks. As examples, the AI logic unit may include a natural language processing technique, image analysis technique, predictive analytics, statistical analysis, prescriptive analytics, market modeling, web analytics, security analytics, risk analytics, software analytics, and the like. Tabuchi, Figs 1-4, para 78, 77-79.
[0079] In embodiments, determining the AI logic unit may include using the data orchestration platform management engine to compare the set of attributes associated with a particular set of interpreted data to a collection of profiles characterizing a variety of available AI logic units, assigning a suitability score to a plurality of the AI logic units (e.g., to indicate the fitness/appropriateness of that AI logic unit to process the data), and determining one or more AI logic units that achieve a suitability score threshold to perform the processing operation with respect to the set of interpreted data (access the at least one memory device for module-specific data segments). For example, consider that a set of interpreted data is associated with a set of attributes of “data format: JPEG” and “data type: security camera image.” The data orchestration platform management engine may compare the set of interpreted data with a collection of available AI logic units of a natural language processing technique, a statistical analysis technique, an image analysis technique, and a sentiment analysis technique. In embodiments, the data orchestration platform management engine may assign a suitability score of 13 to the statistical analysis technique (e.g., as the set of interpreted data does not include statistics, statistical analysis may not be suitable), a suitability score of 89 for the image analysis technique (e.g., as the data is an image, image analysis is highly relevant), and a suitability score of 55 to the sentiment analysis technique (e.g., while potentially applicable, the data type of security image indicates a lower relevance for sentiment analysis). Subsequently, the data orchestration platform management engine may select an AI logic unit that achieves a suitability score threshold (e.g., the AI logic unit having the highest score, or an AI logic unit having a suitability score of 80 or more, for instance) as the AI logic unit to process the set of interpreted data. Other methods of determining the AI logic unit to process the set of interpreted data are also possible. Tabuchi, Figs 1-4, para 79, 78.
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel (execute in parallel a plurality of processes). In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
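For illustration only, the suitability-score selection quoted from Tabuchi at paragraph 79 can be sketched in Python as follows, reusing the scores and the threshold from the quoted example; the profile functions themselves are hypothetical.

def select_logic_units(attributes, profiles, threshold=80):
    """Score each candidate AI logic unit against the data's attributes
    and return the units meeting the suitability threshold."""
    scores = {name: fn(attributes) for name, fn in profiles.items()}
    selected = [name for name, s in scores.items() if s >= threshold]
    return selected, scores

profiles = {
    "statistical_analysis": lambda a: 13,  # data includes no statistics
    "image_analysis": lambda a: 89 if a["data_format"] == "JPEG" else 10,
    "sentiment_analysis": lambda a: 55,    # potentially applicable, lower relevance
}

attributes = {"data_format": "JPEG", "data_type": "security camera image"}
selected, scores = select_logic_units(attributes, profiles)
print(selected)  # ['image_analysis'], matching the scores in Tabuchi's example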
* execute a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network
(i.e., [0080] At block 480, the set of interpreted data may be processed using the AI logic unit. Generally, processing can include analyzing, converting, investigating, evaluating, modifying, or otherwise performing an operation on the set of interpreted data using the AI logic unit (execute a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network (execute AI logic units for predictive analysis techniques including classifiers using at least neural network machine learning method (see also para 96), on the AI logic unit module specific interpreted data segments, the interpreted data segments are shared attributes keyed and tagged data segments associated with at least one attribute key tag which is specific, appropriate, suitable for at least one of the AI logic unit computation modules, (see para 78-79)). In embodiments, processing may include using the determined AI logic unit to add or subtract attributes to the set of interpreted data (e.g., add additional measurement values to a table), updating the value of existing attributes of the set of interpreted data (e.g., change an existing record in a table based on a new measurement), using the set of interpreted data as an input for another operation (e.g., using a time value to calculate a velocity), extract a conclusion or inference from the set of interpreted data (e.g., an anomalous voltage value has occurred), converting the set of interpreted data to another type or format (e.g., converting a Fahrenheit temperature value to a Celsius temperature value), or the like. In particular, processing may include executing a statistical analysis technique, a machine learning technique, a data optimization technique, a predictive analysis technique, or other suitable analytics operation (execute a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network (execute AI logic unit module predictive analysis machine learning technique including classifiers using at least neural network (see para 96) on the AI logic unit module specific interpreted data segments, which are segments associated with at least one attribute key tag which is specific, appropriate, suitable for at least one of the AI logic unit computation modules (see para 78-79)). As an example, processing may include using a regression analysis technique to analyzing the statistical relationship between two sets of voltage measurements. Other methods of processing the set of interpreted data using the AI logic unit are also possible. Tabuchi, Figs 1-4, par 80, 81. [0081] …Accordingly, the data interpretation dictionary may attach a set of metadata indicating a set of attributes of “data type-solar cell measurement,” “data format-binary value,” and “unit of measurement-volts” to the set of raw data to generate a set of interpreted data. In response to generation of the set of interpreted data, an AI logic unit may be determined to process the set of interpreted data. 
For instance, the set of interpreted data may be compared to a variety of candidate AI logic units (e.g., natural language processing units, image analysis units, predictive analytics units (predictive analysis technique including classifiers, para 96), and it may be determined that a statistical analysis unit configured to derive relationships between the measured voltage value and past voltage values (e.g., to identify anomalies) has a suitability score that achieves a suitability score threshold for the set of interpreted data. Accordingly, the set of interpreted data may be processed using the determined statistical analysis unit. Other methods of managing the set of interpreted data are also possible. Tabuchi, Figs 1-4, para 81-83)
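For illustration only, a minimal sketch of executing a machine learning method on module-specific data segments using one artificial neural network is given below. The fixed weights and toy inputs are hypothetical; they merely show a classification pass of the kind the mapping attributes to Tabuchi's AI logic units (e.g., flagging anomalous measurement segments), not a trained model from either reference.

import numpy as np

def neural_classify(segments: np.ndarray) -> np.ndarray:
    """One hidden layer plus a sigmoid output; returns class 0/1 per segment."""
    w1 = np.array([[1.0, -1.0], [0.5, 0.5]])  # hidden weights (2 features -> 2 units)
    w2 = np.array([1.5, -2.0])                # output weights
    hidden = np.tanh(segments @ w1)
    logits = hidden @ w2
    prob = 1.0 / (1.0 + np.exp(-logits))      # sigmoid activation
    return (prob > 0.5).astype(int)

segments = np.array([[7.6, 7.4], [-5.0, 1.0]])  # two keyed, module-specific segments
print(neural_classify(segments))                # -> [1 0] for these toy inputs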
* output the result of the executed machine learning method to at least one of the at least one shared memory device and at least one other computation module
Tabuchi teaches that, Subsequently, a machine learning model may govern selection of appropriate AI logic units to process the interpreted data based on the attributes with which the interpreted data is associated. Based on the results of the processing by the AI logic unit, a management operation may be performed with respect to the network communication environment to facilitate performance, efficiency, and reliability of subsequent data collection operations (output the result of the executed machine learning method to at least one of the at least one memory device and at least one other computation module (at least one other subsequent management operation, data collection operations module)). Tabuchi, Figs 1-6, 9, para 7, 20, 99, 80, 81, 94.
Generally, performing can include initiating, executing, instantiating, implementing, accomplishing, enacting, or otherwise carrying out the management operation. The management operation may include an action, process, procedure, policy, activity, or behavior to facilitate performance of the data orchestration platform. As examples, the management operation may include reconfiguring a data orchestration device (e.g., updating firmware, changing settings), adding or removing a data orchestration device (e.g., removing a malfunctioning sensor, installing a new sensor), providing a notification (e.g., to a user or network administrator), routing data traffic (e.g., changing a data routing path) or the like (output the result of the executed machine learning method (output the result of the AI-logic unit machine learning classifier, para 7, 96) to at least one of the at least one memory device). Tabuchi, Figs 1-6, 9, para 99, 80, 7, 20, 96, 81, 94.
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
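For illustration only, the limitation as mapped (one computation module outputting its result so that the shared store and at least one other computation module can consume it) can be sketched in Python as follows; the module names and result fields are hypothetical.

shared_store = {}

def module_a(store: dict) -> None:
    # First computation module writes its result under a key.
    store["module_a:result"] = {"anomaly": True, "score": 0.93}

def module_b(store: dict) -> None:
    # Second computation module consumes the other module's output.
    result = store["module_a:result"]
    if result["anomaly"]:
        store["module_b:action"] = "notify administrator"  # cf. Tabuchi's management operation

module_a(shared_store)
module_b(shared_store)
print(shared_store)  # both the result and the follow-on action now sit in the store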
Regarding claim 7, which depends from claim 1 and recites:
wherein at least part of the plurality of computation modules is configured to represent categorical constructions, preferably chosen from a group comprising at least: object, morphism, functor, commutative diagrams, non-commuting morphisms or functors, natural transformation, pullback, pushforward, projective limit, inductive limit, sub-object classifier.
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the plurality of computation modules. Tabuchi teaches that, [0102] As illustrated in FIG. 7, the data processing pipeline 750 may include a series of assets 702 to 726 for performing processing operations on data. Generally, the assets may include AI-logic units configured to perform predetermined processing operations on data ingested by the data orchestration platform. In embodiments, the assets of the data processing pipeline 750 may be visual representations of the various software modules and hardware components for carrying out the aspects of the method for data orchestration platform management described herein. For instance, as shown in FIG. 7, the data processing pipeline 750 may include information source assets 702, 722 (e.g., assets representing a particular information source, group or class of devices), optimization assets 704, 724 (e.g., assets for generating sets of interpreted data from raw data), a storage asset 706 (e.g., an asset for temporarily storing the interpreted data based on its attributes), processing assets 708, 712 (e.g., assets for sorting, categorizing (at least part of the plurality of computation modules is configured to represent categorical constructions (computation modules configured to categorize and represent categorical constructions), preferably chosen from a group comprising at least: object, morphism, functor, commutative diagrams, non-commuting morphisms or functors, natural transformation, pullback, pushforward, projective limit, inductive limit, sub-object classifier (preferably but not necessarily chosen from a group comprising at least: object, morphism, functor, commutative diagrams, non-commuting morphisms or functors, natural transformation, pullback, pushforward, projective limit, inductive limit, sub-object classifier)), converting, and normalizing the set of interpreted data), a cloud analytics asset 710 (e.g., an asset for applying statistical or predictive analytics to the set of interpreted data), a big data asset 714 (e.g., an asset for generalizing, normalizing, and sharing insights from the data), and a machine learning asset 726 (e.g., an asset for applying machine learning techniques to the data and constructing a machine learning model). In embodiments, structuring the data processing pipeline 750 may include using the data orchestration platform management engine to automatically generate a recommended series of assets for processing of sets of raw data based on data processing pipelines utilized in the past for similar processing applications. In certain embodiments, structuring the data processing pipeline 750 may include providing a graphical user interface to a user or network administrator, and allowing the user/administrator to construct the data processing pipeline 750 using desired assets. Tabuchi, Fig 7, para 102.
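For illustration only, one simple way a computation module could represent categorical constructions such as objects, morphisms, and commutative diagrams is sketched below in Python. This is a generic sketch of morphism composition and a commutativity check, not a construction taken from Tabuchi, Jampani, or the claims.

def compose(g, f):
    """Return the composite morphism g ∘ f."""
    return lambda x: g(f(x))

def f(x): return x + 1        # morphism f : A -> B
def g(x): return x * 2        # morphism g : B -> C
def h(x): return (x + 1) * 2  # morphism h : A -> C, the claimed composite

# The triangle with edges f, g, h commutes iff g ∘ f equals h pointwise
# on a test domain of objects' elements.
print(all(compose(g, f)(x) == h(x) for x in range(100)))  # True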
Regarding claim 12, which depends from claim 7 and recites:
wherein the data processing device is configured to attribute the same natural language description to parts of different images showing the same object.
Tabuchi in view of Jampani teaches the data processing device of claim 7. Tabuchi does not specifically disclose configured to attribute the same natural language description to parts of different images showing the same object.
However, Jampani teaches that, [0027] FIG. 3 illustrates a system 300 in which semantic consistency constraints are employed by a computer system 302 as part of part segmentation to encourage robustness to object variations, in accordance with one embodiment. The computer system implementing semantic consistency constraints is, in an embodiment, a system that trains a neural network based on an input image collection 304. In an embodiment, the input image collection 304 includes one or more images that are of a shared category. In an embodiment, the input image collection 304 includes sets of images from two or more categories of images. In an embodiment, a category refers to a classification of images such that the images share regions of commonality (e.g., all pictures of the same type of animal or object) (configured to attribute the same natural language description (configured to attribute the same natural language description category classification) to parts of different images showing the same object). Information relating to semantic meaning of objects and parts is embedded in intermediate convolutional neural network features of classification networks, in accordance with one embodiment, and a semantic consistency loss function taps into the hidden layer information (e.g., of ImageNet training features). In an embodiment, the computer system 302 analyzes the image to find representative features clusters of classification features corresponding to different part segments. Jampani, Fig 3, para 27, 14-18, 38, 47.
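For illustration only, attributing the same natural language description to parts of different images that fall in the same feature cluster, in the spirit of the semantic consistency passage quoted from Jampani, can be sketched in Python as follows; the features, centroids, and descriptions are hypothetical.

import numpy as np

descriptions = {0: "animal head", 1: "animal torso"}

def describe_parts(part_features: np.ndarray, centroids: np.ndarray) -> list:
    """Assign each part the description of its nearest feature centroid,
    so matching parts of different images share one description."""
    dists = np.linalg.norm(part_features[:, None, :] - centroids[None, :, :], axis=2)
    return [descriptions[i] for i in dists.argmin(axis=1)]

centroids = np.array([[1.0, 0.0], [0.0, 1.0]])
image1_parts = np.array([[0.9, 0.1], [0.1, 0.8]])
image2_parts = np.array([[1.1, -0.1]])          # same object, different image
print(describe_parts(image1_parts, centroids))  # ['animal head', 'animal torso']
print(describe_parts(image2_parts, centroids))  # ['animal head'], the same description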
Regarding claim 14, Tabuchi teaches:
A computer implemented method for processing data (i.e., Tabuchi, Figs 1-6, 9, para 8), comprising:
- running at least one computing device which receives input data,
outputs output data and writes data into and reads data out from at least one shared memory device, wherein the at least one computing device executes in parallel a plurality of processes, said plurality of processes comprising
(i.e., The data orchestration platform 100 may be realized by software or hardware that automatically and dynamically monitors, controls, and manages devices, computer systems (running at least one computing device (a data processing computer system 300 device, see also Figure 3, para 57)), middleware, services, and other elements of the network communication environment 150. Here, the data orchestration platform 100 can implement the aspects of the present disclosure using methods such as IoT (Internet of Things) device management, AI data processing, machine learning, big data processing, and the like. Tabuchi, Figs 1-3, 9, para 45, 47, 23, 57, 65.
The major components of the computer system 300 include one or more processors 302, a memory 304, a terminal interface 312, a storage interface 314, an I/O (Input/Output) device interface 316, and a network interface 318, all of which are communicatively coupled, directly or indirectly, for inter-component communication via a memory bus 306, an I/O bus 308, bus interface unit 309, and an I/O bus interface unit 310 (running at least one computing device which receives input data, outputs output data and writes data into and reads data out from at least one …)). Tabuchi, Figs 1-2, 3, para 57, 59, 63, 57-65. [0063] The storage interface 314 supports the attachment of one or more disk drives or direct access storage devices 322 (which are typically rotating magnetic disk drive storage devices, although they could alternatively be other storage devices, including arrays of disk drives configured to appear as a single large storage device to a host computer, or solid-state drives, such as flash memory). In some embodiments, the storage device 322 may be implemented via any type of secondary storage device (writes data into and reads data out from at least one …). The contents of the memory 304, or any portion thereof, may be stored to and retrieved from the storage device 322 as needed (writes data into and reads data out from at least one …). The I/O device interface 316 provides an interface to any of various other I/O devices or devices of other types, such as printers or fax machines. The network interface 318 provides one or more communication paths from the computer system 300 to other digital devices and computer systems; these communication paths may include, for example, one or more networks 330. Tabuchi, Figs 1-2, 3, para 63, 57, 59, 63, 57-65. [0058] The computer system 300 may contain one or more general-purpose programmable central processing units (CPUs) 302A and 302B, herein generically referred to as the processor 302. In embodiments, the computer system 300 may contain multiple processors; …. Each processor 302 executes instructions stored in the memory 304 and may include one or more levels of on-board cache. Tabuchi, Figs 1-4, 9, para 57-58, 59, 63-64, 57-65. [0064] Although the computer system 300 shown in FIG. 3 illustrates a particular bus structure providing a direct communication path among the processors 302, the memory 304, the bus interface 309, the display system 324, and the I/O bus interface unit 310, in alternative embodiments the computer system 300 may include different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star or web configurations, multiple hierarchical buses, parallel and redundant paths, or any other appropriate type of configuration (the at least one computing device executes … (the computer system 300 is configured with a plurality of processors to execute …)). Furthermore, while the I/O bus interface unit 310 and the I/O bus 308 are shown as single respective units, the computer system 300 may, in fact, contain multiple I/O bus interface units 310 and/or multiple I/O buses 308. While multiple I/O interface units are shown, which separate the I/O bus 308 from various communications paths running to the various I/O devices, in other embodiments, some or all of the I/O devices are connected directly to one or more system I/O buses. Tabuchi, Figs 1-4, 9, para 64, 57-59, 57-65.
Thus, Tabuchi teaches at least one computing device that includes a plurality of processors and at least one memory device. Tabuchi does not specifically disclose executing in parallel a plurality of processes and at least one shared memory.
However, Jampani teaches in the field related to systems and methods to detect one or more segments of one or more objects within one or more images based, at least in part, on a neural network trained in an unsupervised manner to infer the one or more segments. Jampani, Abstract. Jampani, which is analogous to the claimed invention because it is directed to computer systems, neural networks, processing segments, and parallel processing and shared memory, teaches that, [0046] FIG. 6 illustrates a parallel processing unit (“PPU”) 600, in accordance with one embodiment. In an embodiment, the PPU 600 is configured with machine-readable code that, if executed by the PPU, causes the PPU to perform some or all of processes and techniques described throughout this disclosure. In an embodiment, the PPU 600 is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel (execute in parallel a plurality of processes). In an embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by the PPU 600. Jampani, Fig 6, 7-10, para 46, 47-48, 58, 54, 74, 84, 94. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel (execute in parallel a plurality of processes). In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
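As a purely illustrative aside (this is not the code of Tabuchi or Jampani, and all names are hypothetical), the combination relied on above, several processes executing in parallel and exchanging results through one shared memory block, can be sketched in Python, using multiprocessing.shared_memory as a stand-in for the claimed shared memory device:

```python
# Hypothetical sketch: parallel processes exchanging data via one shared block.
from multiprocessing import Process
from multiprocessing import shared_memory
import numpy as np

def worker(shm_name: str, index: int) -> None:
    # Attach to the existing shared block and write into this worker's slot.
    shm = shared_memory.SharedMemory(name=shm_name)
    slots = np.ndarray((4,), dtype=np.int64, buffer=shm.buf)
    slots[index] = index * 10  # each process writes its own result
    del slots                  # release the buffer before closing
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4 * 8)  # four int64 slots
    slots = np.ndarray((4,), dtype=np.int64, buffer=shm.buf)
    slots[:] = 0
    procs = [Process(target=worker, args=(shm.name, i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(slots.tolist())  # [0, 10, 20, 30]: results exchanged via shared memory
    del slots
    shm.close()
    shm.unlink()
```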
- at least one data hub process receiving input data and comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments wherein the at least one data hub process stores the keyed data segments in the at least one shared memory device
Tabuchi teaches that, Aspects of FIG. 9 relate to a system architecture 900 for implementing various aspects of the data orchestration platform described herein. In embodiments, as described herein, the data orchestration platform may be communicatively connected to a network 905 (e.g., a network communication environment, Internet of Things network) including a set of information sources (e.g., sensors, users, devices). In certain embodiments, the system architecture 900 may be configured, managed, and structured using a management device 990 (e.g., computer, server, terminal, mobile device). The system architecture 900 may include an orchestration hub 920 configured to ingest data (e.g., set of raw data) from the information sources of the network 905. The orchestration hub 920 may be a software module or hardware component configured to monitor, collect, organize, and manage the data ingested from the network 905 (at least one data hub process receiving input data (at least one data orchestration hub process receiving ingested input data from the at least one first interface and/or the …)). In embodiments, as described herein, the orchestration hub 920 may be configured to map the raw data with a set of device attribute data and a set of connection data (e.g., using a set of information source profiles) to facilitate interpretation of the set of raw data (and comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments (comprising at least one shared attribute keying mapping, tagging sub-process which provides mappings, keys, tags to data segments of the input data creating mapped, keyed, tagged interpreted data segments, see also Figure 4, para 75, 76-78)). … In certain embodiments, the set of raw data may be transmitted directly to an orchestration database 980 (e.g., an AI-based storage system) for storage and categorization. As illustrated in FIG. 9, in certain embodiments, the set of raw data may be processed using a data interpretation dictionary 975 (e.g., a lexical resource configured to extract meaning from the set of raw data) to generate a set of interpreted data. In embodiments, generating the set of interpreted data may include utilizing a set of acquisition status data 976 (e.g., data characterizing the context in which the set of raw data was ingested) and a set of re-optimization data (e.g., data defining how past data was optimized and interpreted). Subsequently, the set of interpreted data may undergo data normalization 960 to be generalized and formatted. As described herein, the set of interpreted data may be returned to the orchestration hub 920 to provide feedback for future data analysis, transmitted to the orchestration processing unit 950 for further processing (e.g., determination of a management action), or stored in the orchestration database 980 (wherein the at least one data hub process stores the keyed data segments in the at least one …). Other types of system architecture 900 are also possible. Tabuchi, Figs 1-4, 9, para 105, 75-78, 20, 60.
As an example, consider a situation in which a set of raw data including a value of “7.6” is collected by a sensor in a zoo aquarium. The set of raw data may be analyzed using the data interpretation dictionary, and a set of interpreted data may be generated that indicates that the value of “7.6” indicates pH data for the water in the zoo aquarium. Accordingly, an attribute of “Measurement Unit-pH” may be attached as metadata to the set of raw data, and the set of raw data and the set of metadata may be bundled together to generate the set of interpreted data (comprising at least one keying sub-process which provides keys to data segments of the input data creating keyed data segments (comprising at least one shared attribute keying tagging sub-process which provides keys, tags to data segments of the input data creating keyed, tagged interpreted data segments, see also Figure 4, para 75, 76, 77-78)). Other methods of generating the set of interpreted data are also possible. Tabuchi, Figs 1-4, 9, para 76, 77, 75-78, 102, 105. In embodiments, at block 462, the set of interpreted data may be stored in an AI-based data storage system (wherein the at least one data hub process stores the keyed data segments in the at least one …). The set of interpreted data may be stored in the AI-based data storage system based on the set of attributes. Generally, storing can include saving, recording, collecting, aggregating, caching, or otherwise maintaining the set of interpreted data in the AI-based data storage system. The AI-based data storage system may include a database management system (DBMS), data repository, cloud storage, or other data maintenance method configured to use AI tools to facilitate recording, searching, and retrieving of stored data. In embodiments, storing the set of interpreted data may include using a machine learning technique to sort sets of interpreted data and group them according to their attributes (e.g., data type, semantic factor, time stamp, unit of measurement, confidence value, severity level). The sorted interpreted data may then be stored in the data storage system in association with the attributes to which they correspond. For example, sets of interpreted data associated with the same semantic factor (e.g., seismic activity anomaly detection) may be stored in the same partition of a database in association with a tag indicating the semantic factor to facilitate data retrieval (e.g., all data associated with a semantic factor of “seismic activity anomaly detection” may be easily searched for and returned). Other methods of storing the set of interpreted data in the AI-based data storage system are also possible (wherein the at least one data hub process stores the keyed data segments in the at least one …). Tabuchi, Figs 1-4, 9, para 77, 76, 75-78, 102, 105.
Aspects of the disclosure relate to storing, in an AI-based data storage system, the set of interpreted data in an output data type based on the set of attributes (wherein the at least one data hub process stores the keyed data segments in the at least one …). Tabuchi, Figs 1-4, 9, para 20, 60, 63, 77, 75-78, 105. The memory 304 may store all or a portion of the various programs, modules and data structures for processing data transfers as discussed herein. For instance, the memory 304 can store a data orchestration platform management application 350. In embodiments, the data orchestration platform management application 350 may include instructions or statements that execute on the processor 302 or instructions or statements that are interpreted by instructions or statements that execute on the processor 302 to carry out the functions as further described below. …. In embodiments, the data orchestration platform management application 350 may include data in addition to instructions or statements. Tabuchi, Figs 1-4, 9, para 60, 63, 20, 77, 75-78, 105.
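For illustration only, a minimal Python sketch of the keying idea in Tabuchi's pH example above; the dictionary contents, class, and function names are hypothetical, not Tabuchi's API.

```python
# Hypothetical sketch of a keying sub-process attaching an attribute key to a
# raw data segment and storing the keyed segment in a shared store.
from dataclasses import dataclass, field

# Toy "data interpretation dictionary": maps a source to an attribute key.
INTERPRETATION_DICT = {"aquarium-sensor-1": "Measurement Unit-pH"}

@dataclass
class KeyedSegment:
    raw: float
    keys: dict = field(default_factory=dict)

def keying_subprocess(source: str, raw_value: float) -> KeyedSegment:
    """Attach an attribute key to a raw data segment, producing a keyed segment."""
    seg = KeyedSegment(raw=raw_value)
    seg.keys["attribute"] = INTERPRETATION_DICT.get(source, "unknown")
    return seg

shared_store = []  # stand-in for the shared memory device
shared_store.append(keying_subprocess("aquarium-sensor-1", 7.6))
print(shared_store[0])  # KeyedSegment(raw=7.6, keys={'attribute': 'Measurement Unit-pH'})
```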
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
- a plurality of processes in the form of computation modules wherein each computation module
(i.e., The data orchestration platform may leverage a data interpretation dictionary in tandem with a machine learning technique in order to interpret data regardless of the data source from which it was received. Subsequently, a machine learning model may govern selection of appropriate AI logic units to process the interpreted data based on the attributes with which the interpreted data is associated (a plurality of processes in the form of computation modules (AI logic units) wherein each computation module). Tabuchi, Figs 1-4, para 7, 18, 28, 78-79, 81-82, 84. As examples, the AI logic unit may include a natural language processing technique, image analysis technique, predictive analytics, statistical analysis, prescriptive analytics, market modeling, web analytics, security analytics, risk analytics, software analytics, and the like (a plurality of processes in the form of computation modules wherein each computation module). Tabuchi, Figs 1-4, para 78.)
* accesses the at least one shared memory device
Tabuchi teaches that, In embodiments, at block 462, the set of interpreted data may be stored in an AI-based data storage system. The set of interpreted data may be stored in the AI-based data storage system based on the set of attributes. Generally, storing can include saving, recording, collecting, aggregating, caching, or otherwise maintaining the set of interpreted data in the AI-based data storage system. The AI-based data storage system may include a database management system (DBMS), data repository, cloud storage, or other data maintenance method configured to use AI tools to facilitate recording, searching, and retrieving of stored data. In embodiments, storing the set of interpreted data may include using a machine learning technique to sort sets of interpreted data and group them according to their attributes (e.g., data type, semantic factor, time stamp, unit of measurement, confidence value, severity level). The sorted interpreted data may then be stored in the data storage system in association with the attributes to which they correspond. For example, sets of interpreted data associated with the same semantic factor (e.g., seismic activity anomaly detection) may be stored in the same partition of a database in association with a tag indicating the semantic factor to facilitate data retrieval (e.g., all data associated with a semantic factor of “seismic activity anomaly detection” may be easily searched for and returned) (accesses the at least one …). Other methods of storing the set of interpreted data in the AI-based data storage system are also possible. Tabuchi, Figs 1-4, para 77, 78, 79. Aspects of the disclosure relate to the recognition that, in some situations, it may be desirable to select an appropriate AI logic unit to process a set of interpreted data based on the attributes of the data. Herein, an AI logic unit may refer to a module, application, routine, algorithm, script, or other AI-based technique configured to examine, discover, interpret, transform, or process data to derive meaning or perform tasks. As examples, the AI logic unit may include a natural language processing technique, image analysis technique, predictive analytics, statistical analysis, prescriptive analytics, market modeling, web analytics, security analytics, risk analytics, software analytics, and the like. Tabuchi, Figs 1-4, para 78, 77-79.
[0079] In embodiments, determining the AI logic unit may include using the data orchestration platform management engine to compare the set of attributes associated with a particular set of interpreted data to a collection of profiles characterizing a variety of available AI logic units, assigning a suitability score to a plurality of the AI logic units (e.g., to indicate the fitness/appropriateness of that AI logic unit to process the data), and determining one or more AI logic units that achieve a suitability score threshold to perform the processing operation with respect to the set of interpreted data (accesses the at least one …). For example, consider that a set of interpreted data is associated with a set of attributes of “data format: JPEG” and “data type: security camera image.” The data orchestration platform management engine may compare the set of interpreted data with a collection of available AI logic units of a natural language processing technique, a statistical analysis technique, an image analysis technique, and a sentiment analysis technique. In embodiments, the data orchestration platform management engine may assign a suitability score of 13 to the statistical analysis technique (e.g., as the set of interpreted data does not include statistics, statistical analysis may not be suitable), a suitability score of 89 for the image analysis technique (e.g., as the data is an image, image analysis is highly relevant), and a suitability score of “55” to the sentiment analysis technique (e.g., while potentially applicable, the data type of security image indicates a lower relevance for sentiment analysis). Subsequently, the data orchestration platform management engine may select an AI logic unit that achieves a suitability score threshold (e.g., the AI logic unit having the highest score, or an AI logic unit having a suitability score of 80 or more, for instance) as the AI logic unit to process the set of interpreted data. Other methods of determining the AI logic unit to process the set of interpreted data are also possible. Tabuchi, Figs 1-4, para 79, 78.
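For illustration only, the suitability-score selection described in Tabuchi's paragraph 79 can be sketched as below; the scores 13, 89, and 55 and the threshold of 80 come from that example, while the function names and profile structure are hypothetical.

```python
# Hypothetical sketch of selecting an AI logic unit by suitability score.
from typing import Optional

def score(attributes: dict, profile: dict) -> int:
    """Toy scorer: reads a precomputed score for the data type from the profile."""
    return profile["score_for"].get(attributes["data type"], 0)

profiles = {
    "statistical analysis": {"score_for": {"security camera image": 13}},
    "image analysis":       {"score_for": {"security camera image": 89}},
    "sentiment analysis":   {"score_for": {"security camera image": 55}},
}

def select_logic_unit(attributes: dict, threshold: int = 80) -> Optional[str]:
    scored = {name: score(attributes, p) for name, p in profiles.items()}
    best = max(scored, key=scored.get)
    # Select only a unit that achieves the suitability score threshold.
    return best if scored[best] >= threshold else None

attrs = {"data format": "JPEG", "data type": "security camera image"}
print(select_logic_unit(attrs))  # image analysis
```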
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel (execute in parallel a plurality of processes). In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
* executes a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network if a module-specific data segment is present and runs idle if no module-specific data segment is present
(i.e., [0080] At block 480, the set of interpreted data may be processed using the AI logic unit. Generally, processing can include analyzing, converting, investigating, evaluating, modifying, or otherwise performing an operation on the set of interpreted data using the AI logic unit (executes a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network if a module-specific data segment is present and runs idle if no module-specific data segment is present (if the module-specific interpreted data segment is present, the processing executes AI logic units for predictive analysis techniques including classifiers using at least a neural network machine learning method (see also para 96) on the AI logic unit module-specific interpreted data segments; the interpreted data segments are shared attribute keyed and tagged data segments associated with at least one attribute key tag which is specific, appropriate, suitable for at least one of the AI logic unit computation modules (see para 78-79); and processing runs idle and does not execute AI logic units if no module-specific interpreted data segment is present)). In embodiments, processing may include using the determined AI logic unit to add or subtract attributes to the set of interpreted data (e.g., add additional measurement values to a table), updating the value of existing attributes of the set of interpreted data (e.g., change an existing record in a table based on a new measurement), using the set of interpreted data as an input for another operation (e.g., using a time value to calculate a velocity), extract a conclusion or inference from the set of interpreted data (e.g., an anomalous voltage value has occurred), converting the set of interpreted data to another type or format (e.g., converting a Fahrenheit temperature value to a Celsius temperature value), or the like. In particular, processing may include executing a statistical analysis technique, a machine learning technique, a data optimization technique, a predictive analysis technique, or other suitable analytics operation (executes a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one artificial neuronal network if a module-specific data segment is present and runs idle if no module-specific data segment is present (if the module-specific interpreted data segment is present, the processing executes the AI logic unit module predictive analysis machine learning technique including classifiers using at least a neural network (see para 96) on the AI logic unit module-specific interpreted data segments, which are segments associated with at least one attribute key tag which is specific, appropriate, suitable for at least one of the AI logic unit computation modules (see para 78-79), and processing runs idle and does not execute AI logic units if no module-specific interpreted data segment is present)). As an example, processing may include using a regression analysis technique to analyze the statistical relationship between two sets of voltage measurements. Other methods of processing the set of interpreted data using the AI logic unit are also possible. Tabuchi, Figs 1-4, para 80, 81.
[0081] …Accordingly, the data interpretation dictionary may attach a set of metadata indicating a set of attributes of “data type-solar cell measurement,” “data format-binary value,” and “unit of measurement-volts” to the set of raw data to generate a set of interpreted data. In response to generation of the set of interpreted data, an AI logic unit may be determined to process the set of interpreted data. For instance, the set of interpreted data may be compared to a variety of candidate AI logic units (e.g., natural language processing units, image analysis units, predictive analytics units (predictive analysis technique including classifiers, para 96)), and it may be determined that a statistical analysis unit configured to derive relationships between the measured voltage value and past voltage values (e.g., to identify anomalies) has a suitability score that achieves a suitability score threshold for the set of interpreted data. Accordingly, the set of interpreted data may be processed using the determined statistical analysis unit. Other methods of managing the set of interpreted data are also possible. Tabuchi, Figs 1-4, para 81-83)
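For illustration only, a hypothetical Python sketch of the "process if a module-specific segment is present, otherwise run idle" behavior mapped above; the queue, key names, and classifier stub are the editor's assumptions, not the code of either reference.

```python
# Hypothetical sketch: a computation module that executes its classifier only
# when a segment keyed for it is present, and otherwise runs idle.
import time
import queue

segments: "queue.Queue[dict]" = queue.Queue()

def classify(segment: dict) -> str:
    return "anomaly" if segment["value"] > 7.0 else "normal"  # stub classifier

def computation_module(module_key: str, polls: int = 5) -> None:
    for _ in range(polls):
        try:
            seg = segments.get_nowait()
        except queue.Empty:
            time.sleep(0.01)  # no module-specific segment: run idle
            continue
        if seg.get("key") == module_key:  # module-specific segment present
            print("result:", classify(seg))
        else:
            segments.put(seg)  # not for this module; leave it for others

segments.put({"key": "Measurement Unit-pH", "value": 7.6})
computation_module("Measurement Unit-pH")  # result: anomaly
```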
* outputs the result of the executed machine learning method to at least one of the at least one shared memory device and at least one other computation module
Tabuchi teaches that, Subsequently, a machine learning model may govern selection of appropriate AI logic units to process the interpreted data based on the attributes with which the interpreted data is associated. Based on the results of the processing by the AI logic unit, a management operation may be performed with respect to the network communication environment to facilitate performance, efficiency, and reliability of subsequent data collection operations (outputs the result of the executed machine learning method to at least one of the at least one … and at least one other computation module (at least one other subsequent management operation, data collection operations computation module)). Tabuchi, Figs 1-6, 9, para 7, 20, 99, 80, 81, 94.
Generally, performing can include initiating, executing, instantiating, implementing, accomplishing, enacting, or otherwise carrying-out the management operation. The management operation may include an action, process, procedure, policy, activity, or behavior to facilitate performance of the data orchestration platform. As examples, the management operation, may include reconfiguring a data orchestration device (e.g., updating firmware, changing settings), adding or removing a data orchestration device (e.g., removing a malfunctioning sensor, installing a new sensor), providing a notification (e.g., to a user or network administrator), routing data traffic (e.g., changing a data routing path) or the like (outputs the result of the executed machine learning method (output the result of the AI-logic unit machine learning classifier, para 7, 96) to at least one of the at least one other computation module (management operation computation module)). Tabuchi, Figs 1-6, 9, para 99, 80, 7, 20, 96, 81, 94.
As discussed above, Tabuchi does not specifically disclose the at least one shared memory.
However, Jampani teaches that, In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory. In an embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In an embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In an embodiment, cooperating threads can refer to a plurality of threads including instructions to perform the task and that exchange data through shared memory (at least one shared memory). Threads and cooperating threads are described in more detail, in accordance with one embodiment, in conjunction with FIG. 8A. Jampani, Fig 6, 7-10, para 58, 46, 47-48, 54, 74, 84, 94.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani, with a reasonable expectation of success in order to accelerate deep learning systems and applications. Jampani, para 48. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
Claims 20 and 25 recite methods that parallel the devices of claims 7 and 12, respectively. Therefore, the analysis discussed above with respect to claims 7 and 12 also applies to claims 20 and 25, respectively. Accordingly, claims 20 and 25 are rejected based substantially on the same rationale as set forth above with respect to claims 7 and 12, respectively.
Claim 27 recites a computer program that parallels the device of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 27. Accordingly, claim 27 is rejected based substantially on the same rationale as set forth above with respect to claim 1. More specifically, regarding the recitation of a computer program which, when the program is executed by a data processing device, causes the data processing device to be configured as claimed, see Tabuchi, para 24-27, 42, 108-113.
Claim(s) 2-3 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani as applied to claims 1 and 14 above, and further in view of Chilimbi et al. (Publication No. US 2015/0324690 A1, published November 12, 2015) hereinafter Chilimbi. The Examiner notes that Chilimbi is cited on Applicant’s Information Disclosure Statement filed on January 5, 2024.
Regarding claim 2, which depends from claim 1 and recites:
wherein at least part of the plurality of computation modules is formed by computation modules having a hierarchical vertical structure with layers and/or at least part of the plurality of computation modules is formed into a horizontal structure by way of computational groups
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the plurality of computation modules. Tabuchi in view of Jampani does not specifically disclose the plurality of computation modules formed by computation modules having a hierarchical vertical structure with layers and/or at least part of the plurality of computation modules is formed into a horizontal structure by way of computational groups.
However, Chilimbi teaches in the field related to machine learning. Chilimbi, para 2-7. Chilimbi, which is analogous to the claimed invention because Chilimbi is directed to machine learning, teaches that, In some embodiments, models for vision tasks typically contain a number of convolutional layers followed by a few fully connected layers. In at least one embodiment, the models may be partitioned vertically across the model worker machines as shown in FIG. 7 (computation modules having a hierarchical vertical structure with layers and/or at least part of the plurality of computation modules is formed into a horizontal structure by way of computational groups). As shown in FIG. 7, the models may be partitioned such that neurons in each of the layers are within a predetermined vertical distance to neurons in neighboring layers. Partitioning the models vertically across the replicas 704A-704N representing groups of the model worker machines may minimize the amount of cross-machine communication between the convolution layers. Chilimbi, Fig 7, para 56, 102, 113.
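For illustration only, vertical partitioning of one layer across worker replicas in the spirit of Chilimbi's FIG. 7 might look like the following Python sketch; the shapes, function names, and worker count are hypothetical.

```python
# Hypothetical sketch: split a layer's weight matrix column-wise so each worker
# holds the synapses of a contiguous vertical slice of neurons.
import numpy as np

def vertical_partition(weights: np.ndarray, n_workers: int) -> list:
    return np.array_split(weights, n_workers, axis=1)

layer = np.arange(24, dtype=float).reshape(4, 6)  # 4 inputs x 6 neurons
parts = vertical_partition(layer, n_workers=3)

# Each worker computes only its slice; outputs are concatenated afterwards,
# which keeps cross-machine communication to the slice boundaries.
x = np.ones(4)
y = np.concatenate([x @ p for p in parts])
assert np.allclose(y, x @ layer)  # the partitioned result matches the full layer
print([p.shape for p in parts])   # [(4, 2), (4, 2), (4, 2)]
```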
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and the computation modules having a hierarchical vertical structure with layers and/or at least part of the plurality of computation modules is formed into a horizontal structure by way of computational groups of Chilimbi, with a reasonable expectation of success in order to accelerate deep learning systems and applications and provide computation and communication optimizations that improve system efficiency and scaling of large neural networks. Jampani, para 48. Chilimbi, Abstract, para 9, 56. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data and optimization of computation and communication of neural networks.
Regarding claim 3, which depends from claim 1 and recites:
wherein at least one routing process is provided which directs output provided by at least one of the computation modules to at least one other computation module and/or the shared memory device.
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the plurality of computation modules and outputting the result of the executed machine learning method provided by at least one of the computation modules to at least one other computation module and/or the shared memory device. Tabuchi in view of Jampani does not specifically disclose at least one routing process which directs output provided by at least one of the computation modules to at least one other computation module and/or the shared memory device.
However, Chilimbi teaches that, As shown in FIG. 2, computing machines called neurons (e.g., v.sub.1, v.sub.2, v.sub.3, etc.) associated with the first layer 202 receive an input 204. The first layer 202 represents the input layer. Each of the individual neurons in the first layer 202 outputs a single output to each of the neurons in the second layer 206 of neurons via connections between the neurons in each layer (at least one routing process (at least one connection routing process) is provided which directs output provided by at least one of the computation modules (at least one of the neuron computation modules in the first layer) to at least one other computation module (to at least another neuron computation module in the second layer) and/or the shared memory device). The second layer 206 represents a layer for learning low-level features. Accordingly, each neuron in the second layer 206 receives multiple inputs and outputs a single output to each of the neurons in the third layer 208. The third layer 208 represents a layer for learning mid-level features. A same process happens for layer 210, which represents a layer for learning high-level features, and layer 212, which represents a layer for learning desired outputs. In layer 212, the output comprises a label 214 representative of the input 204. Chilimbi, Fig 2, para 4, 28, 30, 33.
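For illustration only, a hypothetical Python sketch of a routing process that directs a module's output either to another computation module or to a shared store; the routing table and names are the editor's assumptions, not Chilimbi's implementation.

```python
# Hypothetical sketch: route one module's output to the next module or,
# at the end of the chain, to a shared store.
shared_memory_store = {}

def module_a(x: float) -> float:
    return x * 2.0

def module_b(x: float) -> float:
    return x + 1.0

routes = {"module_a": module_b, "module_b": None}  # None => shared store

def routing_process(name: str, output: float) -> None:
    target = routes[name]
    if target is None:
        shared_memory_store[name] = output  # direct output to the shared store
    else:
        routing_process(target.__name__, target(output))  # to the next module

routing_process("module_a", module_a(3.0))
print(shared_memory_store)  # {'module_b': 7.0}
```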
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and the computation modules having a hierarchical vertical structure with layers and/or at least part of the plurality of computation modules is formed into a horizontal structure by way of computational groups and at least one routing process is provided which directs output provided by at least one of the computation modules to at least one other computation module and/or the shared memory device of Chilimbi, with a reasonable expectation of success in order to accelerate deep learning systems and applications and provide computation and communication optimizations that improve system efficiency and scaling of large neural networks. Jampani, para 48. Chilimbi, Abstract, para 9, 56. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data and optimization of computation and communication of neural networks.
Claims 15-16 recite methods that parallel the devices of claims 2-3, respectively. Therefore, the analysis discussed above with respect to claims 2-3 also applies to claims 15-16, respectively. Accordingly, claims 15-16 are rejected based substantially on the same rationale as set forth above with respect to claims 2-3, respectively.
Claim(s) 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani as applied to claims 1 and 14 above, and further in view of Cilingir et al. (Publication No. US 2018/0075877 A1, published March 15, 2018) hereinafter Cilingir.
Regarding claim 4, which depends from claim 1 and recites:
wherein the at least one data hub process comprises at least one segmentation subprocess which segments input data into data segments and keeps information which shared keyed data segments were segmented from the same input data.
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the at least one data hub process and shared keyed data segments. Tabuchi in view of Jampani does not specifically disclose segmentation subprocess which segments input data into data segments and keeps information which shared keyed data segments were segmented from the same input data.
However, Cilingir teaches in the field related to video data management, summarization and speaker recognition models. Cilingir, para 1. Cilingir, which is analogous to the claimed invention because Cilingir is directed to data segmentation processes, teaches that, Video summarization system 100 is configured to perform video summarization based on speaker segmentation and clustering to identify persons and scenes of interest. The video may be provided by any suitable video stream source 860, such as, for example, a video player or internet streaming source. Audio segments from the video, in which the voice of a single speaker is detected, are grouped or clustered together (segmentation subprocess which segments input data into data segments and keeps information which shared keyed data segments were segmented from the same input data (segmentation subprocess which segments input audio data into single speaker audio data segments and keeps, groups, and clusters together information which shared keyed speaker audio data segments were segmented from the same audio data)). Portions of these clustered audio segments are provided to a user for identification of the speaker as a person of interest. The video can then be summarized as a combination of scenes that include the speaker of interest. Video summarization system 100 may include any or all of the components illustrated in FIGS. 2-6, as described above. Video summarization system 100 can be implemented or otherwise used in conjunction with a variety of suitable software and/or hardware that is coupled to or that otherwise forms a part of platform 810. Video summarization system 100 can additionally or alternatively be implemented or otherwise used in conjunction with user I/O devices that are capable of providing information to, and receiving information and commands from, a user. These I/O devices may include any number or combination of devices collectively referred to as user interface 202. In some embodiments, user interface 202 may include a textual input device such as a keyboard, and a pointer-based input device such as a mouse. Other input/output devices that may be used in other embodiments include a display element, touchscreen, a touchpad, speaker and/or a microphone. Still other input/output devices can be used in other embodiments. Cilingir, Figs 1, 2, para 49, 16-18, 25-26.
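For illustration only, grouping keyed audio segments by speaker similarity while retaining their source recording, loosely following Cilingir's description, can be sketched as below; the embeddings, threshold, and clustering rule are hypothetical.

```python
# Hypothetical sketch: greedy clustering of audio segments by speaker embedding.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_segments(segments: list, threshold: float = 0.9) -> list:
    """A segment joins the first cluster whose centroid is close enough;
    otherwise it starts a new cluster."""
    clusters = []
    for seg in segments:
        for cl in clusters:
            centroid = np.mean([s["embedding"] for s in cl], axis=0)
            if cosine(seg["embedding"], centroid) >= threshold:
                cl.append(seg)
                break
        else:
            clusters.append([seg])
    return clusters

segs = [  # each segment remembers which recording it came from
    {"source": "video1", "embedding": np.array([1.00, 0.00])},
    {"source": "video1", "embedding": np.array([0.00, 1.00])},
    {"source": "video2", "embedding": np.array([0.98, 0.05])},
]
for i, cl in enumerate(cluster_segments(segs)):
    print(i, [s["source"] for s in cl])  # 0 ['video1', 'video2'] / 1 ['video1']
```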
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and the segmentation subprocess which segments input data into data segments and keeps information which shared keyed data segments were segmented from the same input data of Cilingir, with a reasonable expectation of success in order to accelerate deep learning systems and applications and to provide data management technologies as data collections grow in size. Jampani, para 48. Cilingir, Abstract, Figs 1, 2, para 1, 49, 16-18, 25-26. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data.
Claim 17 recites a method that parallels the device of claim 4. Therefore, the analysis discussed above with respect to claim 4 also applies to claim 17. Accordingly, claim 17 is rejected based substantially on the same rationale as set forth above with respect to claim 4.
Claim(s) 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani as applied to claims 1 and 14 above, and further in view of Milletari et al. (Patent No. US 11,804,050 B1, filed October 31, 2019) hereinafter Milletari.
Regarding claim 6, which depends from claim 1 and recites:
wherein the data processing device is configured to repeatedly check the weights of synapses of neuronal networks of at least part of, preferably all of, the plurality of computation modules to make sure they do not diverge.
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the plurality of computation modules and neuronal networks. Tabuchi in view of Jampani does not specifically disclose repeatedly check the weights of synapses of neuronal networks of at least part of the computation modules to make sure they do not diverge.
However, Milletari teaches in the field related to machine learning models and parameters. Milletari, Abstract, col 1:8-25. Milletari, which is analogous to the claimed invention because Milletari is directed to generating neural networks and comparing parameter values, teaches that, (64) In at least one embodiment, machine learning model(s) 108 is collaboratively training over multiple iterations. In at least one embodiment, each iteration may include each training node 102 individually training a corresponding machine learning model(s) 106 using a corresponding model trainer 110 to determine respective sets of values of parameters of machine learning models 106 (repeatedly check (repeatedly over multiple iterations check and determine) the weights (weights parameters) of synapses of neuronal networks of at least part of the computation modules to make sure they do not diverge). In at least one embodiment, at an outset of one or more iterations, each of machine learning models 106 may include or be a same machine learning model with a same set of values for parameters, which may diverge through individual training using a model trainer 110 (repeatedly check (check and determine) the weights (weights parameters) of synapses of neuronal networks of at least part of the computation modules to make sure they do not diverge). In at least one embodiment, each iteration may include training performed by each model trainer 110 occurring for a predetermined period of time, such as a training epoch, which may be a same or different training epoch used by different model trainers 110. In at least one embodiment, each iteration may include after a predetermined period of time, machine learning models 106 and/or portions thereof being provided to interface manager 120 of training aggregator 104 by a corresponding model manager 112. Milletari, Abstract, Fig 1A-B, col 5:13-33, col 3:66-col 4:8.
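For illustration only, a repeated divergence check over per-node weight copies in the spirit of Milletari might be sketched as follows; the tolerance value and the re-synchronization step are the editor's assumptions.

```python
# Hypothetical sketch: flag divergence when any replica's weights stray too
# far from the mean of all replicas, and re-synchronize if they do.
import numpy as np

def diverged(replicas: list, tol: float = 0.5) -> bool:
    mean = np.mean(replicas, axis=0)
    return any(np.linalg.norm(w - mean) > tol for w in replicas)

rng = np.random.default_rng(0)
weights = [np.zeros(4) for _ in range(3)]  # identical starting weights
for iteration in range(5):
    # Each node trains locally (simulated here by a small random update).
    weights = [w + 0.1 * rng.standard_normal(4) for w in weights]
    if diverged(weights):
        # e.g., re-synchronize all replicas to the aggregated mean weights
        weights = [np.mean(weights, axis=0).copy() for _ in weights]
        print(f"iteration {iteration}: re-synchronized")
print("final spread:", max(np.linalg.norm(w - np.mean(weights, axis=0)) for w in weights))
```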
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and the feature to repeatedly check the weights (weights parameters) of synapses of neuronal networks of at least part of the computation modules to make sure they do not diverge of Milletari, with a reasonable expectation of success in order to accelerate deep learning systems and applications and to prevent mistakes in data pre-processing, bugs, wrong hyper-parameter choices, deliberate adversarial actions, or other characteristics associated with a node that may negatively influence the quality of the collectively trained model. Jampani, para 48. Milletari, col 1:8-25. This would have provided the user with the advantages of increased efficiency and accuracy in the execution of multiple processes and exchanges of data.
Claim 19 recites a method that parallels the device of claim 6. Therefore, the analysis discussed above with respect to claim 6 also applies to claim 19. Accordingly, claim 19 is rejected based substantially on the same rationale as set forth above with respect to claim 6.
Claim(s) 10, 13, 23 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Tabuchi in view of Jampani as applied to claims 7 and 20 above, and further in view of Itou et al. (Publication No. US 2019/0073567 A1, published March 7, 2019) hereinafter Itou.
Regarding claim 10, which depends from claim 7 and recites:
wherein a random signal generator is configured to input random signals to at least some of the artificial neurons of at least one of the neuronal networks of at least some of the computation modules and wherein it is preferably provided that the random signals are used to create new concepts, in particular preferably by using projective limits.
Tabuchi in view of Jampani teaches the data processing device of claim 7, including the plurality of computation modules and neuronal networks. Tabuchi in view of Jampani does not specifically disclose wherein a random signal generator is configured to input random signals to at least some of the artificial neurons of at least one of the neuronal networks of at least some of the computation modules and wherein it is preferably provided that the random signals are used to create new concepts, in particular preferably by using projective limits.
However, Itou teaches in the field related to learning device, learning method, and storage medium. Itou, para 2. Itou, which is analogous to the claimed invention because Itou is directed to a learning device, neural networks, and classification of images into categories, teaches that, [0087] Subsequently, the learning-processor 116 generates new random numbers on the basis of random numbers used when the extracted generated image data IMG.sub.I has been generated (step S506). For example, it is assumed that generated image data IMG.sub.I generated using a random number of “5” is distributed at the position closest to the identification boundary when a plurality of pieces of generated image data IMG.sub.I are generated using values such as 1, 2, 3, . . . as random numbers in a number range for which upper and lower limits have been determined. In this case, the learning-processor 116 uses values in a dimension having a finer pitch, such as “5.1” and “4.9,” as new random numbers on the basis of the value “5.” [0088] Thereafter, the learning-processor 116 inputs the newly generated random numbers to the input layer of the first neural network NN (a random signal generator is configured to input random signals to at least some of the artificial neurons of at least one of the neuronal networks of at least some of the computation modules and wherein it is preferably provided that (preferably provided, but not required that) the random signals are used to create new concepts, in particular preferably by using projective limits) which is the generator 210 to generate new generated image data IMG.sub.I (step S508). Itou, para 87-88.
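For illustration only, the refinement in Itou's paragraphs 87-88 (take the random number whose generated sample lies closest to the identification boundary, then generate new inputs at a finer pitch around it) can be sketched as below; the generator and the boundary are hypothetical stubs.

```python
# Hypothetical sketch: refine random inputs near a decision boundary and feed
# the finer-pitch values back into the generator.
def generator(z: float) -> float:
    return z / 10.0  # stub for the first neural network NN (the generator)

def boundary_distance(sample: float) -> float:
    return abs(sample - 0.5)  # distance to a toy identification boundary

coarse = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # initial random numbers
best = min(coarse, key=lambda z: boundary_distance(generator(z)))  # -> 5.0
finer = [best - 0.1, best + 0.1]          # new random numbers at a finer pitch
new_samples = [generator(z) for z in finer]  # fed back into the generator
print(best, finer, new_samples)  # 5.0 [4.9, 5.1] [0.49, 0.51]
```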
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and a random signal generator configured to input random signals to at least some of the artificial neurons of at least one of the neuronal networks of at least some of the computation modules, wherein the random signals are used to create new concepts, in particular by using projective limits, of Itou, with a reasonable expectation of success in order to accelerate deep learning systems and applications and to improve the learning accuracy of machine learning. Jampani, para 48. Itou, para 87-88. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data and improving machine learning accuracy.
Regarding claim 13, which depends from claim 1 and recites:
wherein the data processing device is configured: - to do supervised and unsupervised learning; - to use new concepts created by using random signals in supervised and unsupervised learning.
Tabuchi in view of Jampani teaches the data processing device of claim 1, including the plurality of computation modules and neuronal networks. Tabuchi teaches that, As examples, the machine learning engine may be configured to utilize rule-based learning techniques, deep-learning techniques, dimensionality reduction methods, ensemble learning techniques, instance-based algorithms, regression analysis, supervised learning techniques (supervised learning), Bayesian networks, artificial neural networks, decisions trees, cluster analysis (unsupervised learning), anomaly detection, reinforcement learning, or a combination of these (supervised and unsupervised learning) and other techniques. Tabuchi, para 85. Tabuchi in view of Jampani does not specifically disclose the use of new concepts created by using random signals in learning.
However, Itou teaches in the field related to learning device, learning method, and storage medium. Itou, para 2. Itou, which is analogous to the claimed invention because Itou is directed to a learning device, neural networks, and classification of images into categories, teaches that, [0087] Subsequently, the learning-processor 116 generates new random numbers on the basis of random numbers used when the extracted generated image data IMG.sub.I has been generated (step S506). For example, it is assumed that generated image data IMG.sub.I generated using a random number of “5” is distributed at the position closest to the identification boundary when a plurality of pieces of generated image data IMG.sub.I are generated using values such as 1, 2, 3, . . . as random numbers in a number range for which upper and lower limits have been determined. In this case, the learning-processor 116 uses values in a dimension having a finer pitch, such as “5.1” and “4.9,” as new random numbers on the basis of the value “5.” [0088] Thereafter, the learning-processor 116 inputs the newly generated random numbers to the input layer of the first neural network NN which is the generator 210 to generate new generated image data IMG.sub.I (step S508) (to use new concepts (new image concepts) created by using random signals in learning). Itou, para 87-88.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement the data orchestration platform of Tabuchi using executing in parallel a plurality of processes and at least one shared memory of Jampani and to use new concepts created by using random signals in learning of Itou, with a reasonable expectation of success in order to accelerate deep learning systems and applications and to improve the learning accuracy of machine learning. Jampani, para 48. Itou, para 87-88. This would have provided the user with the advantages of increased efficiency in the execution of multiple processes and exchanges of data and improving machine learning accuracy.
Claims 23 and 26 recite methods that parallel the devices of claims 10 and 13, respectively. Therefore, the analysis discussed above with respect to claims 10 and 13 also applies to claims 23 and 26, respectively. Accordingly, claims 23 and 26 are rejected based substantially on the same rationale as set forth above with respect to claims 10 and 13, respectively.
Allowable Subject Matter
Claims 5, 8-9, 11, 18, 21-22, and 24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if all objections and rejections as being indefinite are overcome.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US-20200004891-A1, US-20160132787-A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BARBARA LEVEL whose telephone number is (303)297-4748. The examiner can normally be reached Monday through Friday 8:00 AM - 5:00 PM MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BARBARA M LEVEL/ Examiner, Art Unit 2142