DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of claims 6-8 in the reply filed on 12/2/2025 is acknowledged.
Priority
Applicant claims the benefit of prior-filed U.S. Patent Application No. 14/907,503, filed January 25, 2016, which is the U.S. National Stage of International Application No. PCT/JP2015/068459, filed June 26, 2015, which in turn claims the benefit of Japanese Patent Application No. 2015-115532, filed June 8, 2015. Applicant’s claim for benefit is acknowledged.
Drawings
The drawings were received on 09/01/2022. These drawings are acceptable.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on the following date(s): 9/01/2022 has (have) been considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are listed below, where the generic placeholder is in bold font and the functional language is italicized:
Claim 2:
A system for execution of a neural network, comprising: at least one first device and at least one second device configured to: communicate with each other through a communication network; and execute the neural network, wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device.
Claim 6:
wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device.
Claim 9:
wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device.
Claim 10:
wherein a third part of the neural network is stored in the at least one memory of the at least one first device, and the at least one processor of the at least one first device is configured to execute the third part of the neural network on the at least one first device based on at least the another resultant data of the execution of the second part on the at least one second device.
Claim 14:
wherein the at least one second device is a device communicating with a plurality of the at least one first devices.
Claim 20:
The system according to claim 6, that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device, wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network.
Claim 23:
wherein the at least one device is configured to acquire sensor data to be processed by the neural network, the first part of the neural network includes an input layer of the neural network, and the second part of the neural network includes a layer later than the first part of the neural network.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
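For illustration only, the following minimal sketch (plain Python/NumPy; all names are hypothetical and are not drawn from the specification or the claims) shows one possible arrangement of the kind recited above, in which the layers of a neural network are split into a first part executed on a first device and a second part executed on a second device, with the resultant data of the first part transmitted to the second device:

    # Illustrative sketch only; hypothetical names, plain NumPy.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    class DevicePart:
        # One device's share of the network: (weights, bias) layers held in its memory.
        def __init__(self, layer_shapes):
            self.layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
                           for m, n in layer_shapes]

        def execute(self, x):
            for W, b in self.layers:
                x = relu(x @ W + b)
            return x

    # The first device holds the input layer; the second device holds later layers.
    first_device = DevicePart([(8, 16)])             # first part of the network
    second_device = DevicePart([(16, 16), (16, 4)])  # second part of the network

    sensor_data = rng.standard_normal(8)                # acquired by the first device
    resultant_data = first_device.execute(sensor_data)  # executed on the first device
    # The resultant data would be transmitted over a communication network
    # (e.g., a socket); shown here as a direct call for brevity.
    output = second_device.execute(resultant_data)      # executed on the second device
    print(output.shape)  # (4,)

In this sketch the transmission is shown as a direct function call; in the claimed arrangement it would occur over a communication network.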
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2, 6-17, 20-27 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 2, 6-10, 14, 20, and 23, the claim limitations noted above invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Specifically, a computer-implemented functional claim limitation invoking claim interpretation under 35 U.S.C. 112(f) that is found indefinite under 35 U.S.C. 112(b), based on the failure of the specification to disclose corresponding structure, material, or acts that perform the entire claimed function, also lacks adequate written description and is not considered sufficiently enabled to support the full scope of the claim. See MPEP § 2181, subsection IV. Therefore, the specification lacks written description under 35 U.S.C. 112(a). See MPEP § 2163.03, subsection VI.
Regarding the dependent claims of claim 2, the claims fail to resolve the noted deficiency and are thus rejected under the same rationale noted above.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2, 6-7, 9-10, 12-14, and 16-27 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11, 68, 70, 72, and 90 of U.S. Patent No. 11,475,289, hereinafter ‘RefDoc’. Although the claims at issue are not identical, they are not patentably distinct from each other because the RefDoc claims are narrower in scope and anticipate the broader claims of the instant application. See the table summary below:
The table below pairs each instant claim of U.S. Application No. 17/901,216 with the examiner’s notes and the corresponding claims of U.S. Patent No. 11,475,289 (Reference Patent, hereinafter ‘RefDoc’).
Claim 2
A system for execution of a neural network, comprising: at least one first device and at least one second device configured to: communicate with each other through a communication network;
and execute the neural network, wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device.
Examiner notes that the RefDoc limitations, noted below, anticipate the broader limitations of the instant claim.
RefDoc teaches:
apparatus as claimed system
first machinery as claimed first device
second machinery as claimed second device
first connection from said first machinery to said apparatus … via a second connection and processor for sending information, as claimed communication network
first & second memory as claimed memory of at least one first or second device
Claim 11
An apparatus comprising: at least one memory storing therein an intermediate neural network model; and at least one processor configured to execute the said intermediate neural network model, wherein said intermediate neural network model is respectively inputted: first information based on information outputted by a first neural network model included in a first machinery which is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, and a first processor configured to execute said first neural network model; and second information based on information outputted by a third neural network model included in a second machinery which is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, so that said intermediate neural network model is at least capable of outputting information based on either(i) inputting said first information via a first connection from said first machinery to said apparatus or (ii) inputting said second information via a second connection from said second machinery to said apparatus.
Claim 6
wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device.
The RefDoc claims anticipate the broader limitations of the instant claim.
RefDoc teaches:
As noted in claim 2 rejection:
first machinery as claimed first device
second machinery as claimed second device
a first neural network model as the claimed first part of the neural network
intermediate model configured to communicate data from first device to second device as noted in claim 11 and 70 limitations
in claim 70: fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery, as claimed transmitted resultant data
Claim 11
...and at least one processor configured to execute the said intermediate neural network model, wherein said intermediate neural network model is respectively inputted: first information based on information outputted by a first neural network model included in a first machinery which is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, and a first processor configured to execute said first neural network model; and second information based on information outputted by a third neural network model included in a second machinery which is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, so that said intermediate neural network model is at least capable of outputting information based on either(i) inputting said first information via a first connection from said first machinery to said apparatus or (ii) inputting said second information via a second connection from said second machinery to said apparatus.
And in claim 70
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Alternatively, Claim 6
wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device.
The RefDoc claims anticipate the broader limitations of the instant claim.
RefDoc teaches:
As noted in claim 2 rejection:
first machinery as claimed first device
second machinery as claimed second device
a first neural network model as the claimed first part of the neural network
intermediate model configured to communicate data from first device to second device as noted in claim 11 and 68 limitations
in claim 68: the first and second devices are configured to transmit learning data, as the claimed resultant data, through the intermediate model successively…
Claim 68:
wherein said first model and said intermediate neural network model and said first neural network model learn successively by error back-propagation method from said intermediate neural network model, and said intermediate neural network model and said third neural network model learn successively by error back-propagation method from said intermediate neural network model.
Claim 7:
wherein the second part of the neural network on the at least one second device is executed based on at least the resultant data of the execution of the first part on the at least one first device.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above for claim 6.
Claim 70
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Claim 9
wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above. The rejection from claim 6 is incorporated, and RefDoc teaches using the error back-propagation as the claimed another resultant data.
Claim 72
wherein said second neural network model, said intermediate neural network model, and said first neural network model learn by error back-propagation method.
Claim 10
wherein a third part of the neural network is stored in the at least one memory of the at least one first device, and the at least one processor of the at least one first device is configured to execute the third part of the neural network on the at least one first device based on at least the another resultant data of the execution of the second part on the at least one second device.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.
Intermediate neural network (NN) as claimed third part of the NN
Error back-propagation as another resultant data.
Claim 72
wherein said second neural network model, said intermediate neural network model, and said first neural network model learn by error back-propagation method.
Claim 13:
wherein the neural network to be executed by the at least one first device and the at least one second device is a neural network having been trained through back propagation.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim as noted in the claim 2 rejection and in RefDoc claim 72.
Claim 72
wherein said second neural network model, said intermediate neural network model, and said first neural network model learn by error back-propagation method.
Claim 14
wherein the at least one second device is a device communicating with a plurality of the at least one first devices.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim as noted in the claim 2 rejection.
Rejection from claim 6 incorporated.
first connection from said first machinery to said apparatus … via a second connection and processor for sending information, as claimed communication network.
Claim 11
Claim 16:
wherein the at least one first device and the at least one second device are installed in different apparatuses.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim as noted in the claim 2 rejection.
Rejection from claim 6 incorporated.
Claim 11:
… and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; ..
Claim 17
wherein execution of the neural network is a process utilizing the neural network.
Examiner notes that the RefDoc limitations anticipate the limitations of the current claim as noted in the claim 2 rejection.
Rejection from claim 6 incorporated.
Claim 11:
An apparatus comprising: at least one memory storing therein an intermediate neural network model; and at least one processor configured to execute …
wherein said intermediate neural network model is respectively inputted: first information based on information outputted by a first neural network model included in a first machinery which is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, and a first processor configured to execute said first neural network model; and second information based on information outputted by a third neural network model included in a second machinery which is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, …
Claim 18
A method for execution of a neural network by at least one first device and at least one second device, the method comprising: communicating, by the first device, with the at least one second device through a communication network; executing, by the at least one first device and the at least one second device, the neural network, wherein a first part of the neural network is executed on the first device and a second part of the neural network is executed on the at least one second device, the first device transmits resultant data of an execution of the first part on the first device to the at least one second device, and the at least one second device executes the second part on the at least one second device based on at least the resultant data of the execution of the first part.
The RefDoc claims anticipate the broader limitations of the instant claim.
first machinery as claimed first device
second machinery as claimed second device
first connection from said first machinery to said apparatus … via a second connection and processor for sending information, as claimed communication network
first & second memory as claimed memory of at least one first or second device
Claim 90
A method comprising: obtaining , via a first connection from a first machinery to an intermediate learning device unit, based on information outputted by a first neural network model included in [[a]] said first machinery into an intermediate neural network model included in [[an]] said intermediate learning device unit, and obtaining , via a second connection from a second machinery to said intermediate learning device unit, based on information outputted by a third neural network model in [[a]] said second machinery into said intermediate neural network model included in said intermediate learning device unit, wherein said information outputted by said first neural network model is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory configured to store said first neural network model, and a first processor configured to execute said first neural network model; wherein said information outputted by said third neural network model is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model, and wherein said intermediate neural network model is at least capable of(i) outputting said first information based on said information inputted via said first connection or (ii) outputting said second information based on said information inputted via said second connection.
Claim 19:
wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network.
The RefDoc claims anticipate the broader limitations of the instant claim.
Claim 90
… inputting information, via a first connection from a first machinery to an intermediate learning device unit, based on information outputted by a first neural network model included in [[a]] said first machinery into an intermediate neural network model included in [[an]] said intermediate learning device unit, and obtaining an outputted second information by inputting information, via a second connection from a second machinery to said intermediate learning device unit, based on information outputted by a third neural network model in [[a]] said second machinery into said intermediate neural network model included in said intermediate learning device unit, wherein said information outputted by said first neural network model is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory configured to store said first neural network model, and a first processor configured to execute said first neural network model; and wherein said information outputted by said third neural network model is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model,..
Claim 20:
The system according to claim 6, that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device, wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network.
The RefDoc claims anticipate the broader limitations of the instant claim.
first machinery as claimed first device
second machinery as claimed second device
first connection from said first machinery to said apparatus … via a second connection and processor for sending information, as claimed communication network
first & second memory as claimed memory of at least one first or second device
Rejection from claim 6 incorporated.
Claim 11
...and at least one processor configured to execute the said intermediate neural network model, wherein said intermediate neural network model is respectively inputted: first information based on information outputted by a first neural network model included in a first machinery which is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, and a first processor configured to execute said first neural network model; and second information based on information outputted by a third neural network model included in a second machinery which is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, so that said intermediate neural network model is at least capable of outputting information based on either(i) inputting said first information via a first connection from said first machinery to said apparatus or (ii) inputting said second information via a second connection from said second machinery to said apparatus.
And in claim 70
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Claim 21
wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network.
The RefDoc claims anticipate the broader limitations of the instant claim.
first machinery as claimed first device
second machinery as claimed second device
first connection from said first machinery to said apparatus … via a second connection and processor for sending information, as claimed communication network
first & second memory as claimed memory of at least one first or second device
intermediate network model transmitting between the device models as the claimed split
Rejection from claim 6 incorporated.
Claim 11
...and at least one processor configured to execute the said intermediate neural network model, wherein said intermediate neural network model is respectively inputted: first information based on information outputted by a first neural network model included in a first machinery which is based on information from a first sensor included in said first machinery, wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, and a first processor configured to execute said first neural network model; and second information based on information outputted by a third neural network model included in a second machinery which is based on information from a second sensor included in said second machinery, wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, so that said intermediate neural network model is at least capable of outputting information based on either(i) inputting said first information via a first connection from said first machinery to said apparatus or (ii) inputting said second information via a second connection from said second machinery to said apparatus.
And in claim 70
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Claim 22
wherein the first part of the neural network and the second part of the neural network are different layers of the neural network.
The RefDoc claims anticipate the broader limitations of the instant claim.
Second and first device models as claimed different models having respective layers
Rejection from claim 6 incorporated.
Claim 11
… wherein said first machinery incorporates said first sensor, a first memory storing said first neural network model, …wherein said second machinery incorporates said second sensor, a second memory storing said third neural network model, and a second processor configured to execute said third neural network model; wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; and wherein said at least one processor is configured to execute said intermediate neural network model, so that said intermediate neural network model is at least capable of outputting information based on either(i) inputting said first information via a first connection from said first machinery to said apparatus or (ii) inputting said second information via a second connection from said second machinery to said apparatus.
Claim 23
wherein the at least one device is configured to acquire sensor data to be processed by the neural network, the first part of the neural network includes an input layer of the neural network, and the second part of the neural network includes a layer later than the first part of the neural network.
The RefDoc claims anticipate the broader limitations of the instant claim.
first machinery as claimed first device
second machinery as claimed second device
successive processing of learning data through the intermediate model, as noted in claim 68, includes the claimed later layers of the instant claims
Alternatively, per claim 70, information inputted into the second device through the intermediate model includes the claimed later layers of the instant claims
Rejection from claim 6 incorporated.
Claim 68
wherein said first model and said intermediate neural network model and said first neural network model learn successively by error back-propagation method from said intermediate neural network model, and said intermediate neural network model and said third neural network model learn successively by error back-propagation method from said intermediate neural network model.
And alternatively in claim 70:
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Claim 24
wherein a plurality of layers of the neural network are split into at least the first part and the second part.
The RefDoc claims anticipate the broader limitations of the instant claim.
first machinery as claimed first device
second machinery as claimed second device
successive processing of learning data, with the split first and second models in the respective first and second devices, through the intermediate model as noted in claim 68 includes the claimed separate models
Alternatively, per claim 70, information inputted into the second device through the intermediate model includes the claimed separate models
Rejection from claim 6 incorporated.
Claim 68
wherein said first model and said intermediate neural network model and said first neural network model learn successively by error back-propagation method from said intermediate neural network model, and said intermediate neural network model and said third neural network model learn successively by error back-propagation method from said intermediate neural network model.
And alternatively in claim 70:
wherein third information, outputted based on said first information inputted into said intermediate neural network model, is transmitted to said first machinery, and fourth information, outputted based on said second information inputted into said intermediate neural network model, is transmitted to said second machinery.
Claim 25
wherein the at least one first device and the at least one second device are at different locations.
Rejection from claim 6 incorporated.
First and second machinery as different locations
Claim 11
…wherein said first and second memories are distinct from said at least one memory, and said first and second processors are distinct from said at least one processor; wherein said apparatus, said first machinery and said second machinery are different from each other; …
Claim 26
wherein output data output from the second part of the neural network on the at least one second device by an execution of the second part of the neural network based on at least the resultant data of the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot.
Rejection from claim 6 incorporated.
Machinery as claimed manufacturing devices
Claims 11 and 70
Machinery can include machines and robots as noted in claim 75:
wherein each of said first machinery and second machinery is: an automobile, an airplane, a robot, an industrial machine, an environment control terminal of a chemical plant, or facility horticulture.
Claim 27
wherein output data, output from the third part of the neural network on the at least one first device by an execution of the third part of the neural network based on at least the another resultant data of the execution of the second part on the at least one second device, is a control command for controlling a manufacturing device or a robot.
Rejection from claim 9 incorporated.
Machinery as claimed manufacturing devices
Claims 11 and 72
Machinery can include machines and robots as noted in claim 75:
wherein each of said first machinery and second machinery is: an automobile, an airplane, a robot, an industrial machine, an environment control terminal of a chemical plant, or facility horticulture.
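For illustration of the back-propagation mappings above (RefDoc claims 68, 70, and 72), the following minimal sketch (plain NumPy; hypothetical names; linear layers for brevity) shows error back-propagation proceeding successively from an intermediate model back through split device-side models:

    # Illustrative sketch only; hypothetical names, plain NumPy.
    import numpy as np

    rng = np.random.default_rng(1)
    lr = 0.01
    W_first = rng.standard_normal((4, 6)) * 0.1   # model on the first machinery
    W_inter = rng.standard_normal((6, 6)) * 0.1   # intermediate model
    W_third = rng.standard_normal((6, 2)) * 0.1   # model on the second machinery

    x = rng.standard_normal(4)       # information from a sensor on the first machinery
    target = np.array([1.0, 0.0])

    # Forward pass across the split models (linear for brevity).
    h1 = x @ W_first                 # first model output (transmitted onward)
    h2 = h1 @ W_inter                # intermediate model output
    y = h2 @ W_third                 # final output

    # Backward pass: the squared-error gradient propagates successively
    # from the intermediate model back toward the first model.
    g_y = 2.0 * (y - target)         # d(loss)/dy
    g_h2 = g_y @ W_third.T
    g_h1 = g_h2 @ W_inter.T
    W_third -= lr * np.outer(h2, g_y)
    W_inter -= lr * np.outer(h1, g_h2)
    W_first -= lr * np.outer(x, g_h1)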
Claims 8, 11-12 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2, 6-7, 9, 68, 70 and 72 of U.S. Patent No. 11,475,289 (hereinafter ‘RefDoc’) in view of Kasabov et al. (US 20030149676, hereinafter ‘Kasa’).
Regarding claim 8, RefDoc teaches transmitting the resultant data as noted in the table above for the claim 6 and 7 rejections. RefDoc does not expressly disclose the use of a vector data type.
Kasa does expressly disclose the use of a vector data type, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 [… a characteristic vector result from the execution of the first part on the at least one first device]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… a characteristic vector result from the execution of the first part on the at least one first device]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and RefDoc are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using neural network models as disclosed by RefDoc.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and RefDoc as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
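For illustration only, the following minimal sketch (hypothetical names; Python standard library plus NumPy) shows resultant data represented as a vector data type and serialized for transmission to the second device, in the manner the rejection reads Kasa’s vectors onto claim 8:

    # Illustrative sketch only; hypothetical names.
    import json
    import numpy as np

    # Output of the first part's execution, represented as a vector data type.
    resultant_vector = np.array([0.12, 0.0, 0.87, 0.33])
    payload = json.dumps(resultant_vector.tolist())   # serialized for the network
    received = np.array(json.loads(payload))          # reconstructed on the second device
    assert np.allclose(received, resultant_vector)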
Regarding claim 11, RefDoc teaches transmitting the resultant data as noted in the table above for the claim 2 and 9 rejections. RefDoc does not expressly disclose the use of a vector data type.
Kasa does expressly disclose the use of a vector data type, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 [… a characteristic vector result from the execution of the second part on the at least one second device]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… a characteristic vector result from the execution of the second part on the at least one second device.]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and RefDoc are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using a neural network model as disclosed by RefDoc.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and RefDoc as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
Regarding claim 12, RefDoc teaches processing neural network models, including layer data, as noted in the table above for the claim 2 rejection. RefDoc does not expressly disclose the use of weights.
Kasa does expressly disclose the use of weights, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 […group of weights associated with layers]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50 [… group of weights associated with layers]. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… group of weights associated with layers]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and RefDoc are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using a neural network model as disclosed by RefDoc.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and RefDoc as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
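For illustration only, the following minimal sketch (hypothetical names; plain NumPy) shows a part of a neural network stored as groups of weights, one group per layer, consistent with the reading of Kasa’s connection weights W1(r) and W2(r) onto claim 12:

    # Illustrative sketch only; hypothetical names, plain NumPy.
    import numpy as np

    rng = np.random.default_rng(2)
    # Each entry is one layer's group of weights (a matrix) and biases.
    first_part_weights = [
        {"W": rng.standard_normal((8, 16)), "b": np.zeros(16)},   # layer 1
        {"W": rng.standard_normal((16, 16)), "b": np.zeros(16)},  # layer 2
    ]
    # Storing this part in a device's memory amounts to storing these groups.
    total_parameters = sum(layer["W"].size + layer["b"].size
                           for layer in first_part_weights)
    print(total_parameters)  # 416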
Regarding claim 15, RefDoc teaches processing neural network models as noted in the table above for the claim 2 rejection. RefDoc does not expressly disclose wherein the at least one first device is one of a personal computer, a tablet, a portable phone, a smartphone, a portable information terminal, or a touch pad.
Kasa does expressly disclose wherein the at least one first device is one of a personal computer, a tablet, a portable phone, a smartphone, a portable information terminal, or a touch pad, in [0143]: Possible applications of the invention include adaptive speech recognition in a noisy environment, adaptive spoken language evolving systems, adaptive process control, adaptive robot control, adaptive knowledge based systems for learning genetic information, adaptive agents on the Internet, adaptive systems for on-line decision making on financial and economic data, adaptive automatic vehicle driving systems [a portable information terminal] that learn to navigate in a new environment (cars, helicopters, etc), and classifying bio-infomatic data.
Kasa and RefDoc are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using a neural network model as disclosed by RefDoc.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and RefDoc as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 6-17, 20-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 2, 6-10, 14, 20, and 23, the claim limitations noted above invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Specifically, the limitations recite functional language without corresponding structure (e.g., the computer/hardware computing element and the algorithm). For computer-implemented functional claim limitations invoking claim interpretation under 35 U.S.C. 112(f), the applicant’s specification must clearly link or associate the recited generic placeholder to the corresponding structure performing the claimed functions, such that one skilled in the art could identify the structure, material, or acts from that description as being adequate to perform the claimed function, as required under 35 U.S.C. 112(b) and (f). See MPEP § 2181, subsection III. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Regarding the dependent claims of claim 2, the claims fail to resolve the noted deficiency and are thus rejected under the same rationale noted above.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Regarding claim 27, the limitation “the third part of the neural network on the at least one first device” lacks sufficient antecedent basis in the claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 1 is rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Hare et al. (US 20160306725, hereinafter ‘Hare’).
Regarding independent claim 1, Hare teaches a system for execution of a neural network, comprising: (in [0004] Embodiments include a method, system, and computer program product for accumulating sensor data from a plurality of sensor utilizing a physics based model containing differential equations that describe components and sub-systems within a complex networked system; selecting, by an algorithm, a sub-set of best sensors to capture effects of each failure mode from a plurality of sensors, each sensor being associated with at least one of the components and the sub-systems within the complex networked system; training a plurality of neural networks for each subsystem and component within the complex networked system to detect and identify faults within the sensor data; and in response to the sub-set of best sensors being selected and the plurality of neural networks being trained for each subsystem and component, executing the algorithm to detect and isolate the faults within the sensor data.)
at least one first device and at least one second device configured to: communicate with each other through a communication network; and execute the neural network, wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device. (in [0028] FIG. 3 also includes a plurality of neural networks, each of which being associated with a particular node. For instance, neural networks 372 and 373 are respectively associated with the environmental control sub-systems 301 and 302 [at least one first device and at least one second device configured to: communicate with each other through a communication network, wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device.], while neural networks 374-379 [alternatively at least one first device and at least one second device configured to: communicate with each other through a communication network; wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device] are associated with the second heat exchanger 220, the air cycle machine 240, the sensor G, the second heat exchanger 320, the air cycle machine 340, and the sensor G1, respectively. That is, each neural network 372-379 can be a black box model constructed at each node of a tree for binary classification of that node as healthy or faulty. The neural networks 372-379 also provide ease of implementation in real-time applications. Each neural network 372-379 utilizes a model that is trained via data received from a selected set of readings from one or more of the sensors (e.g., A-H) to detect and isolate a faulty node. They are trained using, for example, data generated while the particular component is healthy. This data can also include scenarios when all of the other components in the air management system are faulty as well capture the behavior of the healthy component under off nominal input conditions. In this way, each neural network 372-379 reduces the computational complexity by eliminating healthy branches of the tree to reduce false alarms through subsystem and facilitate component isolation.
Examiner notes that the recited networks are considered the claimed neural network, segmented through connected nodes of a single neural network model, as noted above.)
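For illustration of the segmentation reading applied above only, the following minimal Python sketch shows a single neural network whose parts are stored in, and executed by, separate devices that exchange intermediate data. All names, shapes, and values (the Device class, the two-part split, the random weights) are hypothetical and are not drawn from Hare.

import numpy as np

class Device:
    """A hypothetical device holding one part (a group of layers) of a single network in its own memory."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # list of (W, b) pairs stored in this device's memory

    def execute(self, x):
        # Execute only the layers stored on this device.
        for W, b in self.weights:
            x = np.tanh(W @ x + b)
        return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]

first = Device("first", layers[:1])    # first part of the neural network
second = Device("second", layers[1:])  # second part of the neural network

x = rng.standard_normal(3)
intermediate = first.execute(x)        # executed on the first device
output = second.execute(intermediate)  # executed on the second device, given the exchanged data
print(output)

The same structure holds whether the parts are node-level sub-networks (as in Hare's per-subsystem neural networks 372-379) or layer-level splits.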
Claims 2, 6-12, and 14-27 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Sinyavskiy et al. (US 20130325768, hereinafter ‘Sin’).
Regarding independent claim 2, Sin teaches a system for execution of a neural network, comprising: (in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608. And in [0278] Generalized learning methodology described herein may enable different parts of the same network to implement different adaptive tasks (as described above with respect to FIGS. 5B-5C). The end user of the adaptive device may be enabled to partition network into different parts, connect these parts appropriately, and assign cost functions to each task (e.g., selecting them from predefined set of rules or implementing a custom rule)...)
at least one first device and at least one second device configured to: communicate with each other through a communication network; and execute the neural network, wherein at least one processor of the at least one first device is configured to execute a first part of the neural network stored in at least one memory of the at least one first device, and at least one processor of the at least one second device is configured to execute a second part of the neural network stored in at least one memory of the at least one second device. (As depicted in FIGS. 6A-6B and in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions [at least one first device and at least one second device as one of several partitions] 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608… [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality [at least one first device and at least one second device as one of several partitions]. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A. The partition 630 may implement motor control commands required for the robotic arm to reach and grasp the identified object, or motor commands configured to move robot or camera to a new location, which may require reinforcement signal r(t) 614. The partition 630 may generate the output {Y} 638 of the network 600 implementing adaptive controller apparatus (e.g., the apparatus 520 of FIG. 5). The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.
[media_image1.png: greyscale figure, 700 x 520])
Regarding claim 6, the rejection of claim 2 is incorporated and Sin further teaches the system according to claim 2, wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions [wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device], for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition [wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device] (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…)
Regarding claim 7, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the second part of the neural network on the at least one second device is executed based on at least the resultant data of the execution of the first part on the at least one first device. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} [wherein the second part of the neural network on the at least one second device is executed based on at least the resultant data of the execution of the first part on the at least one first device] of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) [wherein the second part of the neural network on the at least one second device is executed based on at least the resultant data of the execution of the first part on the at least one first device] of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…)
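By way of illustration only, the forwarding mapped for claims 6 and 7 reduces to the following hypothetical Python sketch: the first device transmits the resultant data of its part over a link, and the second device executes its part based on that data. The queue.Queue stand-in for the communication network, the thread structure, and the weights are assumptions, not drawn from Sin.

import queue
import threading
import numpy as np

link = queue.Queue()  # stand-in for the communication network between the two devices

def first_device(x, w1):
    y1 = np.tanh(w1 @ x)  # execution of the first part on the first device
    link.put(y1)          # transmit the resultant data to the second device

def second_device(w2, out):
    y1 = link.get()               # receive the first part's resultant data
    out.append(np.tanh(w2 @ y1))  # execute the second part based on at least that data

rng = np.random.default_rng(1)
w1, w2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
out = []
t2 = threading.Thread(target=second_device, args=(w2, out))
t1 = threading.Thread(target=first_device, args=(rng.standard_normal(3), w1))
t2.start(); t1.start(); t1.join(); t2.join()
print(out[0])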
Regarding claim 8, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the resultant data transmitted to the at least one second device is a characteristic vector result from the execution of the first part on the at least one first device. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} [wherein the resultant data transmitted to the at least one second device is a characteristic vector result from the execution of the first part on the at least one first device] of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) [wherein the resultant data transmitted to the at least one second device is a characteristic vector result from the execution of the first part on the at least one first device] of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…; Examiner notes that the neural network processes vectors, in [0021] Learning rules used with spiking neuron networks may be typically expressed in terms of original spike trains instead of their secondary features (e.g., the rate or the latency from the last spike). The result is that a spiking neuron operates on spike train space, transforming a vector of spike trains (input spike trains) into single element of that space (output train)… ;
And in the incorporated reference noted in Sin [0188], U.S. Application No. 13/314,066 (published as US 20130151450, hereinafter ‘InCorpPon’), in [0010]: The complexity of real neurons is highly abstracted when modeling artificial neurons. A schematic diagram of an artificial neuron is illustrated in FIG. 1. The model comprises a vector of inputs x=[x.sub.1, x.sub.2 . . . , x.sub.n].sup.T, a vector of weights w=[w.sub.1, . . . w.sub.n] (weights define the strength of the respective signals), and a mathematical function which determines the activation of the neuron's output y. The activation function may have various forms. In the simplest neuron models, the activation function is a linear function and the neuron output is calculated as:… [0136] In order to quantitatively evaluate the performance of learning, two distance measures are used. For analog signal outputs, the mean square error (MSE) between the target and output vectors is computed.)
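Illustrative only: as mapped for claim 8, the resultant data crossing the link can be a fixed-length characteristic (feature) vector. The JSON wire format and the function name extract_characteristic_vector below are hypothetical choices.

import json
import numpy as np

def extract_characteristic_vector(x, W):
    # The first part's output is a fixed-length characteristic (feature) vector.
    return np.tanh(W @ x)

W = np.eye(3)  # hypothetical first-part weights
vec = extract_characteristic_vector(np.array([0.5, -1.0, 2.0]), W)
payload = json.dumps({"characteristic_vector": vec.tolist()})  # serialized for transmission
received = np.array(json.loads(payload)["characteristic_vector"])
assert np.allclose(vec, received)  # the second device recovers the same vector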
Regarding claim 9, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A. The partition 630 may implement motor control commands required for the robotic arm to reach and grasp the identified object, or motor commands configured to move robot or camera to a new location, which may require reinforcement signal r(t) 614 [wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device]...; And in [0190] The homogeneous nature of the network 600 may enable dynamic reconfiguration of the network during its operation. FIG. 6B illustrates one exemplary implementation of network reconfiguration in accordance with the disclosure. The network 640 may comprise partition 650, which may be configured to perform unsupervised learning task, and partition 660, which may be configured to implement supervised and reinforcement learning simultaneously [wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device]. The network configuration of FIG. 6B may be used to perform signal separation tasks by the partition 650 and signal classification tasks by the partition 660. The partition 650 may be operated according to unsupervised learning rule and may generate output {Y3} denoted by the arrow 658 in FIG. 6B. The partition 660 may be operated according to a combined reinforcement and supervised rule, may receive supervised and reinforcement input 656 [wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device], and/or may generate the output {Y4} 668.)
Additionally, the reference incorporated at [0001], U.S. patent application Ser. No. 13/XXX,XXX entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. US 20130325775, hereinafter ‘InCorpSin’), teaches sending two signals from a second device partition to a first device partition, as depicted in FIG. 6C, and in [0159]: The partition 690 may be configured to receive the output 688 [wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device] of the partition 680 and to further process it (e.g., perform adaptive control) via a combination of reinforcement and supervised learning. In one or more implementations, the learning rule employed by the partition 690 may comprise a hybrid learning rule. The hybrid learning rule may comprise reinforcement and supervised learning combination, as described, for example, by Eqn. 34 below. Operation of the partition 690 during learning in this implementation may be aided by teaching signal 694 r(t) [wherein the at least one second device is configured to transmit another resultant data of an execution of the second part on the at least one second device to the at least one first device]. The teaching signal 694 r(t) may comprise (1) supervisory signal y.sup.d(t), which may provide, for example, desired locations (waypoints) for an autonomous robotic apparatus; and (2) reinforcement signal r(t), which may provide, for example, how close the apparatus navigates with respect to these waypoints.
[media_image2.png: greyscale figure, 682 x 530])
Regarding claim 10, the rejection of claim 9 is incorporated and Sin further teaches the system according to claim 9, wherein a third part of the neural network is stored in the at least one memory of the at least one first device, and the at least one processor of the at least one first device is configured to execute the third part of the neural network on the at least one first device. (As depicted in InCorpSin FIG. 6C and in [0159] The partition 690 [wherein a third part of the neural network is stored in the at least one memory of the at least one first device, … as a node part of the first device in FIG. 6C: 690] may be configured to receive the output 688 of the partition 680 and to further process it (e.g., perform adaptive control) via a combination of reinforcement and supervised learning. In one or more implementations, the learning rule employed by the partition 690 may comprise a hybrid learning rule. The hybrid learning rule may comprise reinforcement and supervised learning combination, as described, for example, by Eqn. 34 below. Operation of the partition 690 during learning in this implementation may be aided by teaching signal 694 r(t). The teaching signal 694 r(t) may comprise (1) supervisory signal y.sup.d(t), which may provide, for example, desired locations (waypoints) for an autonomous robotic apparatus; and (2) reinforcement signal r(t) [and the at least one processor of the at least one first device is configured to execute the third part of the neural network on the at least one first device], which may provide, for example, how close the apparatus navigates with respect to these waypoints. [0160] The dynamic network learning reconfiguration illustrated in FIGS. 6A-6C may be used, for example, in an autonomous robotic apparatus performing exploration tasks (e.g., a pipeline inspection autonomous underwater vehicle (AUV), or space rover, explosive detection, and/or mine exploration). When certain functionality of the robot is not required (e.g., the arm manipulation function) the available network resources (i.e., the nodes 602) [and the at least one processor of the at least one first device is configured to execute the third part of the neural network on the at least one first device …] may be reassigned to perform different tasks. Such reuse of network resources may be traded for (i) smaller network processing apparatus, having lower cost, size and consuming less power, as compared to a fixed pre-determined configuration; and/or (ii) increased processing capability for the same network capacity.)
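Illustrative only: the claim 10 mapping, in a hypothetical sketch where the first device stores and executes a first and a third part while the second part runs on the second device, whose result is transmitted back. All weights and shapes are assumptions.

import numpy as np

rng = np.random.default_rng(2)
part1 = rng.standard_normal((4, 3))  # first part, stored in the first device's memory
part2 = rng.standard_normal((4, 4))  # second part, stored on the second device
part3 = rng.standard_normal((2, 4))  # third part, also stored on the first device

x = rng.standard_normal(3)
y1 = np.tanh(part1 @ x)   # first device executes the first part and transmits y1
y2 = np.tanh(part2 @ y1)  # second device executes the second part and transmits y2 back
y3 = np.tanh(part3 @ y2)  # first device executes the third part on the returned data
print(y3)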
Regarding claim 11, the rejection of claim 9 is incorporated and Sin further teaches the system according to claim 9, wherein the another resultant data transmitted to the at least one first device is a characteristic vector result from the execution of the second part on the at least one second device. (Examiner notes that the neural network processes vectors; in [0021] Learning rules used with spiking neuron networks may be typically expressed in terms of original spike trains instead of their secondary features (e.g., the rate or the latency from the last spike). The result is that a spiking neuron operates on spike train space, transforming a vector [wherein the another resultant data transmitted to the at least one first device is a characteristic vector result from the execution of the second part on the at least one second device] of spike trains (input spike trains) into single element of that space (output train)… ;
And in the incorporated reference noted in Sin [0188], InCorpPon (US 20130151450), in [0010]: The complexity of real neurons is highly abstracted when modeling artificial neurons. A schematic diagram of an artificial neuron is illustrated in FIG. 1. The model comprises a vector of inputs x=[x.sub.1, x.sub.2 . . . , x.sub.n].sup.T, a vector of weights w=[w.sub.1, . . . w.sub.n] (weights define the strength of the respective signals), and a mathematical function which determines the activation of the neuron's output y [wherein the another resultant data transmitted to the at least one first device is a characteristic vector result from the execution of the second part on the at least one second device]. The activation function may have various forms. In the simplest neuron models, the activation function is a linear function and the neuron output is calculated as:… [0136] In order to quantitatively evaluate the performance of learning, two distance measures are used. For analog signal outputs, the mean square error (MSE) between the target and output vectors [wherein the another resultant data transmitted to the at least one first device is a characteristic vector result from the execution of the second part on the at least one second device] is computed.)
Regarding claim 12, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part. (in [0186] In one or more implementations, the nodes 602 of the network 600 may comprise spiking neurons [wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part] (e.g., the neurons 730 of FIG. 9, described below), the connections 604, 608 may be configured to carry spiking input into neurons, and spiking output from the neurons, respectively.... And in [0008] When the task changes, the learning rules (typically effected by adjusting the control parameters w={w.sub.i, w.sub.2, . . . , w.sub.n}) [wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part] may need to be modified to suit the new task. Hereinafter, the boldface variables and symbols with arrow superscripts denote vector quantities, unless specified otherwise. Complex control applications, such as for example, autonomous robot navigation, robotic object manipulation, and/or other applications may require simultaneous implementation of a broad range of learning tasks. Such tasks may include visual recognition of surroundings, motion control, object (face) recognition, object manipulation, and/or other tasks. In order to handle these tasks simultaneously, existing implementations may rely on a partitioning approach [wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part], where individual tasks are implemented using separate controllers, each implementing its own learning rule (e.g., supervised, unsupervised, reinforcement)… [0012] Even when a neural network is used as the computational engine for these learning tasks, individual tasks may be performed by a separate network partition that implements a task-specific set of learning rules (e.g., adaptive control, classification, recognition, prediction rules, and/or other rules)… [0107] One or more generalized learning methodologies described herein may enable different parts of the same network to implement different adaptive tasks. The end user of the adaptive device may be enabled to partition network into different parts, connect these parts appropriately, and assign cost functions to each task (e.g., selecting them from predefined set of rules or implementing a custom rule)…
Examiner notes that Sin teaches the neurons/nodes associated with the claimed weights for performing partitioned tasks of a neural network, where the claimed parts include a plurality of layers, as noted above and in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network [wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part] apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608. [0186] In one or more implementations, the nodes 602 of the network 600 may comprise spiking neurons (e.g., the neurons 730 of FIG. 9, described below), the connections 604, 608 may be configured to carry spiking input into neurons, and spiking output from the neurons, respectively. The neurons 602 may be configured to generate responses (as described in, for example, U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated by reference herein in its entirety) which may be propagated via feed-forward connections 608. [0187] In some implementations, the network 600 may comprise artificial neurons, such as for example, spiking neurons described by U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated supra, artificial neurons with sigmoidal activation function, binary neurons (perceptron), radial basis function units, and/or fuzzy logic networks…
Additionally, the incorporated references teach the connections associated with a plurality of layers; in [0188]: Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality [wherein the first part includes a first group of weights associated with layers of the first part, and the second part includes a second group of weights associated with layers of the second part]…The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.
[0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620 [and the second part includes a second group of weights associated with layers of the second part, having processing layer 630 and segment portion of the input layer 614 connected to processing layer 630], 630 [wherein the first part includes a first group of weights associated with layers of the first part], comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604 [wherein the first part includes a first group of weights associated with layers of the first part, having processing layer 620 and segment portion of the input layer 614 connected to processing layer 620], and providing outputs via connections 608… [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions [wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device], for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition [wherein the at least one first device is configured to transmit resultant data of an execution of the first part on the at least one first device to the at least one second device] (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…)
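Illustrative only: as mapped for claim 12, each part carries its own group of weights keyed to the layers it contains. The dictionary layout and layer names below are hypothetical.

import numpy as np

rng = np.random.default_rng(3)
# Each part's memory holds the weight group for its own layers.
first_part = {"layer_0": rng.standard_normal((8, 4)),
              "layer_1": rng.standard_normal((8, 8))}
second_part = {"layer_2": rng.standard_normal((2, 8))}

def run_part(weights_by_layer, x):
    for name in sorted(weights_by_layer):  # apply this part's layers in order
        x = np.tanh(weights_by_layer[name] @ x)
    return x

x = rng.standard_normal(4)
print(run_part(second_part, run_part(first_part, x)))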
Regarding claim 14, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one second device is a device communicating with a plurality of the at least one first devices. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630 [wherein the at least one second device is a device communicating with a plurality of the at least one first devices], as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…)
Regarding claim 15, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one first device is one of a personal computer, a tablet, a portable phone, a smartphone, a portable information terminal, or a touch pad. (in [0107] One or more generalized learning methodologies described herein may enable different parts of the same network to implement different adaptive tasks. The end user of the adaptive device may be enabled to partition network into different parts, connect these parts appropriately, and assign cost functions to each task (e.g., selecting them from predefined set of rules or implementing a custom rule)… [0109] Implementations of the disclosure may be, for example, deployed in a hardware and/or software implementation of a neuromorphic computer system. In some implementations, a robotic system [wherein the at least one first device is one of a .. portable information terminal …] may include a processor embodied in an application specific integrated circuit, which can be adapted or configured for use in an embedded application (e.g., a prosthetic device)…; And in [0281] Advantageously, the present disclosure can be used to simplify and improve control tasks for a wide assortment of control applications including, without limitation, industrial control, adaptive signal processing, navigation, and robotics. Exemplary implementations of the present disclosure may be useful in a variety of devices including without limitation prosthetic devices (such as artificial limbs), industrial control, autonomous and robotic apparatus, HVAC, and other electromechanical devices requiring accurate stabilization, set-point control, trajectory tracking functionality or other types of control. Examples of such robotic devices may include manufacturing robots (e.g., automotive), military devices, and medical devices (e.g., for surgical robots). Examples of autonomous navigation may include rovers (e.g., for extraterrestrial, underwater, hazardous exploration environment), unmanned air vehicles, underwater vehicles, smart appliances (e.g., ROOMBA.RTM.) [wherein the at least one first device is one of a personal computer, a tablet, a portable phone, a smartphone, a portable information terminal, or a touch pad], and/or robotic toys. The present disclosure can advantageously be used in other applications of adaptive signal processing systems (comprising for example, artificial neural networks), including: machine vision, pattern detection and pattern recognition, object classification, signal filtering, data segmentation, data compression, data mining, optimization and scheduling, complex mapping, and/or other applications.)
Regarding claim 16, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one first device and the at least one second device are installed in different apparatuses. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions [wherein the at least one first device and the at least one second device are installed in different apparatuses], for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition [wherein the at least one first device and the at least one second device are installed in different apparatuses] (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…; Partitions are considered differently labeled devices having different tasks executed on each respective circuitry, as depicted in FIG. 6A, and in [0281] Advantageously, the present disclosure can be used to simplify and improve control tasks for a wide assortment of control applications including, without limitation, industrial control, adaptive signal processing, navigation, and robotics [wherein the at least one first device and the at least one second device are installed in different apparatuses]. Exemplary implementations of the present disclosure may be useful in a variety of devices including without limitation prosthetic devices (such as artificial limbs), industrial control, autonomous and robotic apparatus, HVAC, and other electromechanical devices requiring accurate stabilization, set-point control, trajectory tracking functionality or other types of control. Examples of such robotic devices may include manufacturing robots (e.g., automotive), military devices, and medical devices (e.g., for surgical robots) [wherein the at least one first device and the at least one second device are installed in different apparatuses]. Examples of autonomous navigation may include rovers [wherein the at least one first device and the at least one second device are installed in different apparatuses] (e.g., for extraterrestrial, underwater, hazardous exploration environment), unmanned air vehicles, underwater vehicles, smart appliances (e.g., ROOMBA.RTM.), and/or robotic toys. The present disclosure can advantageously be used in other applications of adaptive signal processing systems (comprising for example, artificial neural networks), including: machine vision, pattern detection and pattern recognition, object classification, signal filtering, data segmentation, data compression, data mining, optimization and scheduling, complex mapping, and/or other applications.)
Examiner considers a device to be a type of apparatus, and considers the claimed devices to read on software subroutines executed on distributed processing elements, as depicted in FIG. 6A, as noted above and in [0280] In one or more implementations, the generalized learning apparatus of the disclosure may be implemented as a software library configured to be executed by a computerized neural network apparatus (e.g., containing a digital processor). In some implementations, the generalized learning apparatus may comprise a specialized hardware module (e.g., an embedded processor or controller). In some implementations, the spiking network apparatus may be implemented in a specialized or general purpose integrated circuit (e.g., ASIC, FPGA, and/or PLD). Myriad other implementations may exist that will be recognized by those of ordinary skill given the present disclosure.)
Regarding claim 17, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein execution of the neural network is a process utilizing the neural network. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network [wherein execution of the neural network is a process utilizing the neural network] (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A [wherein execution of the neural network is a process utilizing the neural network]…; Partitions are considered differently labeled devices having different tasks executed on each respective circuitry, as depicted in FIG. 6A, and in [0281] Advantageously, the present disclosure can be used to simplify and improve control tasks for a wide assortment of control applications including, without limitation, industrial control, adaptive signal processing, navigation, and robotics [wherein execution of the neural network is a process utilizing the neural network]. Exemplary implementations of the present disclosure may be useful in a variety of devices including without limitation prosthetic devices (such as artificial limbs), industrial control, autonomous and robotic apparatus, HVAC, and other electromechanical devices requiring accurate stabilization, set-point control, trajectory tracking functionality or other types of control. Examples of such robotic devices may include manufacturing robots (e.g., automotive), military devices, and medical devices (e.g., for surgical robots). Examples of autonomous navigation may include rovers (e.g., for extraterrestrial, underwater, hazardous exploration environment), unmanned air vehicles, underwater vehicles, smart appliances (e.g., ROOMBA.RTM.), and/or robotic toys. The present disclosure can advantageously be used in other applications of adaptive signal processing systems (comprising for example, artificial neural networks) [wherein execution of the neural network is a process utilizing the neural network], including: machine vision, pattern detection and pattern recognition, object classification, signal filtering, data segmentation, data compression, data mining, optimization and scheduling, complex mapping, and/or other applications.)
Regarding independent claim 18, Sin teaches a method for execution of a neural network by at least one first device and at least one second device, the method comprising: (in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608..)
communicating, by the first device, with the at least one second device through a communication network; executing, by the at least one first device and the at least one second device, the neural network, wherein a first part of the neural network is executed on the first device and a second part of the neural network is executed on the at least one second device, the first device transmits resultant data of an execution of the first part on the first device to the at least one second device, and the at least one second device executes the second part on the at least one second device based on at least the resultant data of the execution of the first part. (As depicted in FIGS. 6A-6B and in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions [communicating, by the first device, with the at least one second device through a communication network; executing, by the at least one first device and the at least one second device, the neural network, as one of several partitions of a neural network communicating through connections] 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608… [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality [wherein a first part of the neural network is executed on the first device and a second part of the neural network is executed on the at least one second device, the first device transmits resultant data of an execution of the first part on the first device to the at least one second device, and the at least one second device executes the second part on the at least one second device based on at least the resultant data of the execution of the first part]. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2001, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 [wherein a first part of the neural network is executed on the first device and a second part of the neural network is executed on the at least one second device, the first device transmits resultant data of an execution of the first part on the first device to the at least one second device, …] may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2}) of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 [the first device transmits resultant data of an execution of the first part on the first device to the at least one second device] in FIG. 6A.
The partition 630 may implement motor control commands required for the robotic arm to reach and grasp the identified object, or motor commands configured to move robot or camera to a new location, which may require reinforcement signal r(t) 614. The partition 630 […and the at least one second device executes the second part on the at least one second device based on at least the resultant data of the execution of the first part.] may generate the output {Y} 638 of the network 600 implementing adaptive controller apparatus (e.g., the apparatus 520 of FIG. 5). The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.
[media_image1.png: greyscale figure, 700 x 520])
Additionally, regarding the limitation wherein a first part of the neural network is executed on the first device and a second part of the neural network is executed on the at least one second device, Examiner considers a device to read on software subroutines executed on distributed processing elements, as depicted in FIG. 6A, as noted above and in [0280] In one or more implementations, the generalized learning apparatus of the disclosure may be implemented as a software library configured to be executed by a computerized neural network apparatus (e.g., containing a digital processor). In some implementations, the generalized learning apparatus may comprise a specialized hardware module (e.g., an embedded processor or controller). In some implementations, the spiking network apparatus may be implemented in a specialized or general purpose integrated circuit (e.g., ASIC, FPGA, and/or PLD). Myriad other implementations may exist that will be recognized by those of ordinary skill given the present disclosure.)
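Illustrative only: the method steps mapped for claim 18 (executing the first part, transmitting its resultant data, then executing the second part based on that data) reduce to the following hypothetical sketch, in which the send callable stands in for the communication network.

import numpy as np

def execute_distributed(x, first_part, second_part, send):
    # Method: execute the first part, transmit the resultant data, execute the second part.
    y1 = np.tanh(first_part @ x)      # executed on the first device
    y1 = send(y1)                     # transmitted through the (stand-in) network
    return np.tanh(second_part @ y1)  # executed on the second device based on that data

rng = np.random.default_rng(4)
identity_link = lambda payload: payload  # stand-in for a lossless communication network
print(execute_distributed(rng.standard_normal(3),
                          rng.standard_normal((4, 3)),
                          rng.standard_normal((2, 4)),
                          identity_link))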
Regarding claim 19, the rejection of claim 18 is incorporated and Sin further teaches the method according to claim 18, wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network. (in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network [wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network] apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608. [0186] In one or more implementations, the nodes 602 of the network 600 may comprise spiking neurons (e.g., the neurons 730 of FIG. 9, described below), the connections 604, 608 may be configured to carry spiking input into neurons, and spiking output from the neurons, respectively. The neurons 602 may be configured to generate responses (as described in, for example, U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated by reference herein in its entirety) which may be propagated via feed-forward connections 608. [0187] In some implementations, the network 600 may comprise artificial neurons, such as for example, spiking neurons described by U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated supra, artificial neurons with sigmoidal activation function, binary neurons (perceptron), radial basis function units, and/or fuzzy logic networks… [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality [wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network]…The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.
Additionally, the incorporated references teach the connections associated with a plurality of layers. As incorporated at [0001], U.S. patent application Ser. No. 13/XXX,XXX entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. US 20130325775, hereinafter ‘InCorpSin’) teaches sending two signals from a second device partition to a first device partition, as depicted in FIG. 6C, and in [0159]: The partition 690 may be configured to receive the output 688 [wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network] of the partition 680 and to further process it (e.g., perform adaptive control) via a combination of reinforcement and supervised learning. In one or more implementations, the learning rule employed by the partition 690 may comprise a hybrid learning rule. The hybrid learning rule may comprise reinforcement and supervised learning combination, as described, for example, by Eqn. 34 below. Operation of the partition 690 during learning in this implementation may be aided by teaching signal 694 r(t) [wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network]. The teaching signal 694 r(t) may comprise (1) supervisory signal y.sup.d(t), which may provide, for example, desired locations (waypoints) for an autonomous robotic apparatus; and (2) reinforcement signal r(t), which may provide, for example, how close the apparatus navigates with respect to these waypoints.
[media_image2.png: greyscale figure, 682 x 530])
Regarding claim 20, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the at least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device, wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network. (in [0185] FIGS. 6A-6B illustrate exemplary implementations of reconfigurable partitioned neural network [that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the at least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device, wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network] apparatus comprising generalized learning framework, described above. The network 600 of FIG. 6A may comprise several partitions 610, 620, 630, comprising one or more of nodes 602 receiving inputs 612 {X} via connections 604, and providing outputs via connections 608. [0186] In one or more implementations, the nodes 602 of the network 600 may comprise spiking neurons (e.g., the neurons 730 of FIG. 9, described below), the connections 604, 608 may be configured to carry spiking input into neurons, and spiking output from the neurons, respectively. The neurons 602 may be configured to generate responses (as described in, for example, U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated by reference herein in its entirety) which may be propagated via feed-forward connections 608. [0187] In some implementations, the network 600 may comprise artificial neurons, such as for example, spiking neurons described by U.S. patent application Ser. No. 13/152,105 filed on Jun. 2, 2011, and entitled "APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION", incorporated supra, artificial neurons with sigmoidal activation function, binary neurons (perceptron), radial basis function units, and/or fuzzy logic networks… [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality [that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the at least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device, wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network]…The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.
Additionally, the incorporated references teach the connections associated with a plurality of layers, as incorporated by reference at [0001]: U.S. patent application Ser. No. 13/XXX,XXX, entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. 20130325775), hereinafter ‘InCorpSin’, teaches sending two signals from a second device partition to a first device partition as depicted in FIG. 6C, and in [0159] The partition 690 may be configured to receive the output 688 [that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the at least one memory; and transmit resultant data of an execution of the first part of the neural network to the at least one second device,] of the partition 680 [that is configured to communicate with the at least one second device, comprising: at least one memory configured to store at least a first part of the neural network; and at least one processor configured to: execute the first part of the neural network stored in the at least one memory;] and to further process it (e.g., perform adaptive control) via a combination of reinforcement and supervised learning. In one or more implementations, the learning rule employed by the partition 690 [wherein the resultant data transmitted to the at least one second device is used by the at least one second device to execute a second part of the neural network on the at least one second device, the first part of the neural network and the second part of the neural network being different layers of the neural network] may comprise a hybrid learning rule. The hybrid learning rule may comprise a reinforcement and supervised learning combination, as described, for example, by Eqn. 34 below. Operation of the partition 690 during learning in this implementation may be aided by teaching signal 694 r(t). The teaching signal 694 r(t) may comprise (1) supervisory signal y.sup.d(t), which may provide, for example, desired locations (waypoints) for an autonomous robotic apparatus; and (2) reinforcement signal r(t), which may provide, for example, how close the apparatus navigates with respect to these waypoints.
[media_image2.png: greyscale reproduction of Sin's FIG. 6C]
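For illustration of the claimed arrangement mapped above (a first part of the neural network stored and executed in one device's memory, with resultant data transmitted to a second device that executes a second part comprising later layers), the following is a minimal sketch. The class DevicePartition and all layer shapes are hypothetical assumptions; this is not asserted to be Sin's implementation.

    # Minimal sketch: a network's layers split into a first part held in
    # one device's memory and a second part on another device; the first
    # device transmits its resultant data (intermediate activations).
    import numpy as np

    def dense_layer(x, w):
        """One fully connected layer with a tanh nonlinearity."""
        return np.tanh(x @ w)

    class DevicePartition:
        """Holds a contiguous slice of the network's layers."""
        def __init__(self, weights):
            self.weights = weights  # the "at least one memory" storing this part

        def execute(self, x):
            for w in self.weights:
                x = dense_layer(x, w)
            return x  # resultant data to transmit onward

    rng = np.random.default_rng(0)
    first_part = DevicePartition([rng.normal(size=(4, 8))])    # input layer
    second_part = DevicePartition([rng.normal(size=(8, 8)),
                                   rng.normal(size=(8, 2))])   # later layers

    x = rng.normal(size=(1, 4))              # e.g. acquired sensor data
    resultant = first_part.execute(x)        # executed on the first device
    output = second_part.execute(resultant)  # transmitted to, and run on, the second device
    print(output.shape)                      # (1, 2)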
Regarding claim 21, the rejection of claim 20 is incorporated and Sin further teaches the system according to claim 20, wherein a plurality of layers of the neural network are split into at least the first part and the second part, the first part and the second part of the neural network being different layers of the neural network.
Regarding claim 22, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the first part of the neural network and the second part of the neural network are different layers of the neural network. (the incorporated references teach the connections associated with a plurality of layers, as incorporated by reference at [0001]: U.S. patent application Ser. No. 13/XXX,XXX, entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. 20130325775), hereinafter ‘InCorpSin’, teaches sending two signals from a second device partition to a first device partition as depicted in FIG. 6C [wherein the first part of the neural network and the second part of the neural network are different layers of the neural network], and in [0158] FIG. 6C illustrates an implementation of dynamically configured neuronal network 660. The network 660 may comprise partitions 670 [wherein the first part of the neural network and the second part of the neural network are different layers of the neural network], 680, 690 [wherein the first part of the neural network and the second part of the neural network are different layers of the neural network]. The partition 670 may be configured to process (e.g., to perform compression, encoding, and/or other processes) the input signal 662 via an unsupervised learning task and to generate processed output {Y5}. The partition 680 may be configured to receive the output 678 of the partition 670 and to further process it, e.g., perform object recognition via supervised learning. Operation of the partition 680 during learning may be aided by training signal 674 r(t), comprising supervisory signal y.sup.d(t), such as for example, examples of desired object to be recognized.
[media_image2.png: greyscale reproduction of Sin's FIG. 6C]
)
Regarding claim 23, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one device is configured to acquire sensor data to be processed by the neural network, the first part of the neural network includes an input layer of the neural network, and the second part of the neural network includes a layer later than the first part of the neural network. (the incorporated references teach the connections associated with a plurality of layers, as incorporated by reference at [0001]: U.S. patent application Ser. No. 13/XXX,XXX, entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. 20130325775), hereinafter ‘InCorpSin’, teaches sending two signals from a second device partition to a first device partition as depicted in FIG. 6C [wherein the first part of the neural network includes an input layer of the neural network, and the second part of the neural network includes a layer later than the first part of the neural network], and in [0158] FIG. 6C illustrates an implementation of dynamically configured neuronal network 660. The network 660 may comprise partitions 670 [wherein the at least one device is configured to acquire sensor data to be processed by the neural network, the first part of the neural network includes an input layer of the neural network having an input layer for capturing {X1} input], 680, 690 [and the second part of the neural network includes a layer later than the first part of the neural network]. The partition 670 may be configured to process (e.g., to perform compression, encoding, and/or other processes) the input signal 662 via an unsupervised learning task and to generate processed output {Y5}. The partition 680 may be configured to receive the output 678 of the partition 670 and to further process it, e.g., perform object recognition via supervised learning. Operation of the partition 680 during learning may be aided by training signal 674 r(t), comprising supervisory signal y.sup.d(t), such as for example, examples of desired object to be recognized.
[media_image2.png: greyscale reproduction of Sin's FIG. 6C]
)
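For illustration of the FIG. 6C pipeline cited above (partition 670 processing raw sensor input in an input-layer role, with a later partition consuming its output), the following is a minimal sketch. The averaging "compression" and the function names partition_670 and partition_680 are illustrative assumptions only, not Sin's disclosed processing.

    # Hedged sketch of the FIG. 6C pipeline: the input-side partition
    # compresses/encodes raw sensor input; a downstream partition
    # (later-layer role) consumes that output for recognition.
    import numpy as np

    def acquire_sensor_data(n=16):
        """Stand-in for the first device acquiring raw sensor input 662."""
        return np.random.default_rng(1).normal(size=n)

    def partition_670(x):
        """Input-layer role: crude 16 -> 4 'compression' of the signal."""
        return x.reshape(4, 4).mean(axis=1)

    def partition_680(y5):
        """Later-layer role: downstream recognition on the first part's output."""
        return int(np.argmax(y5))  # e.g. a recognized object class

    x = acquire_sensor_data()
    y5 = partition_670(x)      # executed where the sensor lives
    label = partition_680(y5)  # executed on the second device
    print(label)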
Regarding claim 24, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein a plurality of layers of the neural network are split into at least the first part and the second part. (the incorporated references teach the connections associated with a plurality of layers, as incorporated by reference at [0001]: U.S. patent application Ser. No. 13/XXX,XXX, entitled "DYNAMICALLY RECONFIGURABLE STOCHASTIC SPIKING NETWORK APPARATUS AND METHODS" (US Pub. No. 20130325775), hereinafter ‘InCorpSin’, teaches sending two signals from a second device partition to a first device partition as depicted in FIG. 6C [wherein a plurality of layers of the neural network are split into at least the first part and the second part], and in [0158] FIG. 6C illustrates an implementation of dynamically configured neuronal network 660. The network 660 may comprise partitions 670, 680, 690 [wherein a plurality of layers of the neural network are split into at least the first part and the second part]. The partition 670 may be configured to process (e.g., to perform compression, encoding, and/or other processes) the input signal 662 via an unsupervised learning task and to generate processed output {Y5}. The partition 680 may be configured to receive the output 678 of the partition 670 and to further process it, e.g., perform object recognition via supervised learning. Operation of the partition 680 during learning may be aided by training signal 674 r(t), comprising supervisory signal y.sup.d(t), such as for example, examples of desired object to be recognized.
[media_image2.png: greyscale reproduction of Sin's FIG. 6C]
)
Regarding claim 25, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the at least one first device and the at least one second device are at different locations. (in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2011, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions [wherein the at least one first device and the at least one second device are at different locations], for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2} of the partition 620 may be forwarded to another partition [wherein the at least one first device and the at least one second device are at different locations] (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A…; Partitions are treated as different labeled devices, each having a different task executed on its respective circuitry, as depicted in FIG. 6A, and in [0281] Advantageously, the present disclosure can be used to simplify and improve control tasks for a wide assortment of control applications including, without limitation, industrial control, adaptive signal processing, navigation, and robotics [wherein the at least one first device and the at least one second device are at different locations]. Exemplary implementations of the present disclosure may be useful in a variety of devices including without limitation prosthetic devices (such as artificial limbs), industrial control, autonomous and robotic apparatus, HVAC, and other electromechanical devices requiring accurate stabilization, set-point control, trajectory tracking functionality or other types of control. Examples of such robotic devices may include manufacturing robots (e.g., automotive), military devices, and medical devices (e.g., for surgical robots) [wherein the at least one first device and the at least one second device are at different locations]. Examples of autonomous navigation may include rovers [wherein the at least one first device and the at least one second device are at different locations] (e.g., for extraterrestrial, underwater, hazardous exploration environment), unmanned air vehicles, underwater vehicles, smart appliances (e.g., ROOMBA.RTM.), and/or robotic toys. The present disclosure can advantageously be used in other applications of adaptive signal processing systems (comprising for example, artificial neural networks), including: machine vision, pattern detection and pattern recognition, object classification, signal filtering, data segmentation, data compression, data mining, optimization and scheduling, complex mapping, and/or other applications.)
Examiner considers a device a type of apparatus and considers software subroutines executed on distributed processing elements as depicted in Fig. 6A, as noted above, and in [0280] In one or more implementations, the generalized learning apparatus of the disclosure may be implemented as a software library configured to be executed by a computerized neural network apparatus (e.g., containing a digital processor). In some implementations, the generalized learning apparatus may comprise a specialized hardware module (e.g., an embedded processor or controller). In some implementations, the spiking network apparatus may be implemented in a specialized or general purpose integrated circuit (e.g., ASIC, FPGA, and/or PLD). Myriad other implementations may exist that will be recognized by those of ordinary skill given the present disclosure. And in [0083] As used herein, the terms "microprocessor" and "digital processor" are meant generally to include digital processing devices. By way of non-limiting example, digital processing devices may include one or more of digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, application-specific integrated circuits (ASICs), and/or other digital processing devices. Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.)
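For illustration of what execution at different locations implies operationally (the first device's resultant data must be serialized and transmitted over a link before the second device can execute its part), the following is a minimal sketch. The JSON framing shown is an assumption for illustration only; neither Sin nor the claims prescribe a wire format.

    # Sketch: serializing an activation vector for transmission between
    # devices at different locations, and restoring it on the receiver.
    import json
    import numpy as np

    def serialize_activation(a):
        """Encode an activation array for transmission between locations."""
        return json.dumps({"shape": a.shape, "data": a.tolist()}).encode()

    def deserialize_activation(payload):
        """Decode a received payload back into an activation array."""
        msg = json.loads(payload.decode())
        return np.array(msg["data"]).reshape(msg["shape"])

    a = np.array([[0.12, -0.43, 0.88]])
    payload = serialize_activation(a)           # sent from the first device
    restored = deserialize_activation(payload)  # received at the second device
    assert np.allclose(a, restored)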
Regarding claim 26, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein output data output from the second part of the neural network on the at least one second device by an execution of the second part of the neural network based on at least the resultant data of the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot. (As depicted in FIGS. 6A-6B, and in [0013] By way of illustration, consider a mobile robot controlled by a neural network [wherein output data output from the second part of the neural network on the at least one second device by an execution of the second part of the neural network based on at least the resultant data of the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot], where the task of the robot is to move in an unknown environment and collect certain resources by the way of trial and error. This can be formulated as reinforcement learning tasks, where the network is supposed to maximize the reward signals (e.g., amount of the collected resource)… [0155] The PD block implementation denoted 474 may be configured to simultaneously implement reinforcement and supervised (RS) learning rules... By way of example, in some implementations reinforcement learning task may be to acquire resources by the mobile robot, where the reinforcement component r(t) provides information about acquired resources (reward signal) from the external environment, while at the same time a human expert shows the robot what should be desired output signal y.sup.d(t) to optimally avoid obstacles. By setting a higher coefficient to the supervised part of the performance function, the robot may be trained to try to acquire the resources if it does not contradict with human expert signal for avoiding obstacles.; And in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus [the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot] to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2011, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A [wherein output data output from the second part of the neural network on the at least one second device by an execution of the second part of the neural network based on at least the resultant data of the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot]. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2} of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A.
The partition 630 may implement motor control commands required for the robotic arm to reach and grasp the identified object, or motor commands configured to move robot [wherein output data output from the second part of the neural network on the at least one second device by an execution of the second part of the neural network based on at least the resultant data of the execution of the first part on the at least one first device is a control command for controlling a manufacturing device or a robot] or camera to a new location, which may require reinforcement signal r(t) 614. The partition 630 may generate the output {Y} 638 of the network 600 implementing adaptive controller apparatus (e.g., the apparatus 520 of FIG. 5). The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.)
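For illustration of the motor-command mapping cited above (partition 630 producing control commands for a robot from upstream partition outputs), the following is a minimal sketch. The command vocabulary and the argmax decoding are illustrative assumptions, not Sin's disclosed control scheme.

    # Sketch: the second part's output interpreted as a discrete control
    # command for a robot, as in the reach-and-grasp example quoted above.
    import numpy as np

    COMMANDS = ["move_forward", "turn_left", "turn_right", "grasp"]

    def to_control_command(y):
        """Map the second part's output vector to a discrete robot command."""
        return COMMANDS[int(np.argmax(y))]

    y = np.array([0.1, 0.05, 0.2, 0.65])  # output of the second part
    print(to_control_command(y))          # -> "grasp"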
Regarding claim 27, the rejection of claim 9 is incorporated and Sin further teaches the system according to claim 9, wherein output data, output from the third part of the neural network on the at least one first device by an execution of the third part of the neural network based on at least the another resultant data of the execution of the second part on the at least one second device, is a control command for controlling a manufacturing device or a robot. (As depicted in FIGS. 6A-6B, and in [0013] By way of illustration, consider a mobile robot controlled by a neural network [wherein output data, output from the third part of the neural network on the at least one first device by an execution of the third part of the neural network based on at least the another resultant data of the execution of the second part on the at least one second device, is a control command for controlling a manufacturing device or a robot], where the task of the robot is to move in an unknown environment and collect certain resources by the way of trial and error. This can be formulated as reinforcement learning tasks, where the network is supposed to maximize the reward signals (e.g., amount of the collected resource)… [0155] The PD block implementation denoted 474 may be configured to simultaneously implement reinforcement and supervised (RS) learning rules... By way of example, in some implementations reinforcement learning task may be to acquire resources by the mobile robot, where the reinforcement component r(t) provides information about acquired resources (reward signal) from the external environment, while at the same time a human expert shows the robot what should be desired output signal y.sup.d(t) to optimally avoid obstacles. By setting a higher coefficient to the supervised part of the performance function, the robot may be trained to try to acquire the resources if it does not contradict with human expert signal for avoiding obstacles.; And in [0188] Different partitions of the network 600 may be configured, in some implementations, to perform specialized functionality. By way of example, the partition 610 may adapt raw sensory input of a robotic apparatus [the execution of the third part on the at least one first device is a control command for controlling a manufacturing device or a robot] to internal format of the network (e.g., convert analog signal representation to spiking) using for example, methodology described in U.S. patent application Ser. No. 13/314,066, filed Dec. 7, 2011, entitled "NEURAL NETWORK APPARATUS AND METHODS FOR SIGNAL CONVERSION", incorporated herein by reference in its entirety. The output {Y1} of the partition 610 may be forwarded to other partitions, for example, partitions 620, 630, as illustrated by the broken line arrows 618, 618_1 in FIG. 6A [wherein output data, output from the third part of the neural network on the at least one first device by an execution of the third part of the neural network based on at least the another resultant data of the execution of the second part on the at least one second device, is a control command for controlling a manufacturing device or a robot]. The partition 620 may implement visual object recognition learning that may require training input signal y.sup.d.sub.j(t) 616, such as for example an object template and/or a class designation (friend/foe). The output {Y2} of the partition 620 may be forwarded to another partition (e.g., partition 630) as illustrated by the dashed line arrow 628 in FIG. 6A.
The partition 630 may implement motor control commands required for the robotic arm to reach and grasp the identified object, or motor commands configured to move robot [wherein output data, output from the third part of the neural network on the at least one first device by an execution of the third part of the neural network based on at least the another resultant data of the execution of the second part on the at least one second device, is a control command for controlling a manufacturing device or a robot] or camera to a new location, which may require reinforcement signal r(t) 614. The partition 630 may generate the output {Y} 638 of the network 600 implementing adaptive controller apparatus (e.g., the apparatus 520 of FIG. 5). The homogeneous configuration of the network 600, illustrated in FIG. 6A, may enable a single network comprising several generalized nodes of the same type to implement different learning tasks (e.g., reinforcement and supervised) simultaneously.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Sinyavskiy et al. (US 20130325768, hereinafter ‘Sin’) in view of Sinyavskiy et al. (US 9489623, hereinafter ‘Sin_Polo’).
Regarding claim 13, the rejection of claim 6 is incorporated and Sin further teaches the system according to claim 6, wherein the neural network to be executed by the at least one first device and the at least one second device is a neural network having been trained through a back propagation. (in [0155] The PD block implementation denoted 474 may be configured to simultaneously implement reinforcement and supervised (RS) learning rules… By way of example, in some implementations reinforcement learning task may be to acquire resources by the mobile robot, where the reinforcement component r(t) provides information about acquired resources (reward signal) from the external environment, while at the same time a human expert shows the robot what should be desired output signal y.sup.d(t) to optimally avoid obstacles. By setting a higher coefficient to the supervised part of the performance function, the robot may be trained [wherein the neural network to be executed by the at least one first device and the at least one second device is a neural network having been trained] to try to acquire the resources if it does not contradict with human expert signal for avoiding obstacles.
And wherein the trained robot includes the claimed neural network as a spiking neural network, in [0138] In some implementations the PD block may transmit the external signal r to the learning block (as illustrated by the arrow 404_1) so that: F(t)=r(t), (Eqn. 33) where signal r provides reward and/or punishment signals from the external environment. By way of illustration, a mobile robot, controlled by spiking neural network, may be configured to collect resources (e.g., clean up trash) while avoiding obstacles (e.g., furniture, walls). In this example, the signal r may comprise a positive indication (e.g., representing a reward) at the moment when the robot acquires the resource (e.g., picks up a piece of rubbish) and a negative indication (e.g., representing a punishment) when the robot collides with an obstacle (e.g., wall). Upon receiving the reinforcement signal r, the spiking neural network [wherein the neural network to be executed by the at least one first device and the at least one second device is a neural network having been trained] of the robot controller may change its parameters (e.g., neuron connection weights) in order to maximize the function F (e.g., maximize the reward and minimize the punishment).)
Sin does not expressly teach the trained spiking neural network as a neural network having been trained through a back propagation.
Sin_Polo does expressly teach the trained spiking neural network as a neural network having been trained through a back propagation. (in 10:25-33: FIG. 2 is a block diagram depicting a spiking neuron network configured for error back propagation [a neural network having been trained through a back propagation], in accordance with one or more implementations. The network 200 may comprise two layers of spiking neurons: layer one (or layer x) comprising neurons 202, 204; and layer two (or layer y), comprising neuron 222. The first layer neurons 202, 204 may receive input 208, 209 and communicate their output to the second layer neuron 222 via connections 212, 214. And in 2:13-22: Spiking neural networks may be utilized in a variety of applications such as, for example, image processing, object recognition, classification, robotics, and/or other. Such networks may comprise multiple nodes (e.g., units, neurons) interconnected with one another via, e.g., synapses (doublets, connections). As used herein “back propagation” is used without limitation as an abbreviation for “backward propagation of errors” which is a method commonly used for training artificial neural networks…)
Sin_Polo and Sin are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for retrieving information from images by considering geometric features from the sensors as input, such as volume and shape parameters of objects captured within the image data, as disclosed by Sin_Polo, with the method of developing information retrieval and processing techniques using a neural network model as disclosed by Sin.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Sin_Polo and Sin as noted above. Doing so allows backward error propagation in distributed networks to be utilized in machine learning tasks (Sin_Polo, Abstract & 2:49-53).
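For illustration of training through back propagation as taught by Sin_Polo, the following is a minimal textbook sketch of backward propagation of errors through a two-layer network. It is a generic gradient step under assumed toy data, not Sin_Polo's spiking-specific formulation.

    # Minimal back-propagation sketch: forward pass, error, and gradient
    # updates propagated backward through two layers.
    import numpy as np

    rng = np.random.default_rng(0)
    w1 = rng.normal(scale=0.5, size=(2, 4))
    w2 = rng.normal(scale=0.5, size=(4, 1))
    x = rng.normal(size=(8, 2))
    y_d = (x[:, :1] > 0).astype(float)  # toy supervised target
    lr = 0.1

    for _ in range(200):
        # forward pass
        h = np.tanh(x @ w1)
        y = h @ w2
        err = y - y_d
        # backward propagation of errors through the two layers
        grad_w2 = h.T @ err
        grad_h = err @ w2.T * (1 - h ** 2)  # tanh derivative
        grad_w1 = x.T @ grad_h
        w1 -= lr * grad_w1 / len(x)
        w2 -= lr * grad_w2 / len(x)

    print(float(np.mean((y - y_d) ** 2)))  # training error decreases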
Alternatively, claims 8 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Sinyavskiy et al. (US 20130325768, hereinafter ‘Sin’) in view of Kasabov et al. (US 20030149676, hereinafter ‘Kasa’).
Regarding claim 8, the rejection of the claim as noted above is incorporated.
Alternatively, Kasa does expressly disclose the use of a vector data type, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 [… a characteristic vector result from the execution of the first part on the at least one first device]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… a characteristic vector result from the execution of the first part on the at least one first device]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and Sin are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using neural network models as disclosed by Sin.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and Sin as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
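For illustration of Kasa's rule node carrying two connection weight vectors W1(r) and W2(r), adjusted through unsupervised similarity and supervised error respectively, the following is a minimal sketch. The update rules shown are simplified placeholders, not Kasa's disclosed equations.

    # Sketch of a rule node with two weight vectors: w1 adapted by an
    # unsupervised (similarity-based) rule, w2 by a supervised error rule.
    import numpy as np

    class RuleNode:
        def __init__(self, dim_in, dim_out, rng):
            self.w1 = rng.normal(size=dim_in)   # W1(r): input-side weights
            self.w2 = rng.normal(size=dim_out)  # W2(r): output-side weights

        def update_w1(self, x, eta=0.05):
            """Unsupervised: move W1 toward the input (similarity-based)."""
            self.w1 += eta * (x - self.w1)

        def update_w2(self, target, output, eta=0.05):
            """Supervised: adjust W2 in proportion to the output error."""
            self.w2 += eta * (target - output)

    rng = np.random.default_rng(2)
    node = RuleNode(3, 2, rng)
    node.update_w1(np.array([1.0, 0.0, -1.0]))
    node.update_w2(np.array([1.0, 0.0]), np.array([0.4, 0.2]))
    print(node.w1, node.w2)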
Regarding claim 11, the rejection of the claim as noted above is incorporated.
Alternatively, Kasa does expressly disclose the use of a vector data type, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 [… a characteristic vector result from the execution of the second part on the at least one second device]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… a characteristic vector result from the execution of the second part on the at least one second device.]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and Sin are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using neural network models as disclosed by Sin.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and Sin as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
Regarding claim 12, the rejection of the claim as noted above is incorporated.
Alternatively, Kasa does expressly disclose the use of weights, in [0037] FIG. 2 illustrates the computer-implemented aspects of the invention stored in memory 6 and/or mass storage 14 and arranged to operate with processor 4. The preferred system is arranged as an evolving connectionist system 20. The system 20 is provided with one or more neural network modules or NNM 22 […group of weights associated with layers]. The arrangement and operation of the neural network module(s) 22 forms the basis of the invention and will be further described below… [0043] The neural network module 22 further comprises rule base layer 48 having one or more rule nodes 50 [… group of weights associated with layers]. Each rule node 50 is defined by two vectors of connection weights W1(r) and W2(r) [… group of weights associated with layers]. Connection weight W1(r) is preferably adjusted through unsupervised learning based on similarity measure within a local area of the problem space. W2(r), on the other hand, is preferably adjusted through supervised learning based on output error, or on reinforcement learning based on output hints. Connection weights W1(r) and W2(r) are further described below.
Kasa and Sin are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art developing information retrieval and processing techniques using a neural network module forming part of an adaptive learning system based on neural network models, as disclosed by Kasa, with the method of developing information retrieval and processing techniques using neural network models as disclosed by Sin.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kasa and Sin as noted above. Doing so allows for developing and implementing adaptive learning systems able to learn quickly from a large amount of data, adapt incrementally in an on-line mode, have an open structure so as to allow dynamic creation of new modules, and memorize information that can be used later (Kasa, Abstract & [0003]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Breed (US 9102220) teaches sequential neural network models for robotic control, in which the data used by an identification neural network to determine the identification of the occupying item, the data used by the position/size determination neural network to determine the position of the occupying item, the data used by the orientation determination neural network, and the data used by the position determination neural networks may all be different from one another.
Izhikevich et al. (US 9764468): teaches the arrangements of neural network model based operations as depicted in Fig. 5:
[media_image3.png: greyscale reproduction of Izhikevich's FIG. 5]
Cosic (US 9443192): teaches distribution of neural network components as depicted in Fig. 38 and in 67:4-61: a device or system for autonomous application operating. The device or system may include a processor coupled to a memory unit. The device or system may further include an application, running on the processor, for performing operations on a computing device. The device or system may further include an interface for receiving a first instruction set and a second instruction set, the interface further configured to receive a new instruction set, wherein the first, the second, and the new instruction sets are executed by the processor and are part of the application for performing operations on the computing device. The device or system may further include a knowledgebase, neural network, or other repository configured to store at least one portion of the first instruction set and at least one portion of the second instruction set, the knowledgebase, neural network, or other repository comprising a plurality of portions of instruction sets. The device or system may further include a decision-making unit configured to compare at least one portion of the new instruction set with at least one portion of the first instruction set from the knowledgebase, neural network, or other repository. The decision-making unit may also be configured to determine that there is a substantial similarity between the new instruction set and the first instruction set from the knowledgebase, neural network, or other repository. The processor may then be caused to execute the second instruction set from the knowledgebase, neural network, or other repository. Any of the operations of the described elements can be performed repeatedly and/or in different orders in alternate embodiments. Specifically, in this example, Processor 11 can be implemented as a device or processing circuit that receives Software Application's 120 instructions, data, and/or other information from Memory 12… Acquisition and Modification Interface 110 may provide Software Application's 120 instructions, data, and/or other information to Artificial Intelligence Unit 130. Artificial Intelligence Unit 130 may learn the operation of Software Application 120 by storing the knowledge of its operation into Knowledgebase 530, Neural Network 850, or other repository. Decision-making Unit 540 may then anticipate or determine Software Application's 120 instructions, data, and/or other information most likely to be used, implemented, or executed in the future.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129