The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is responsive to Amendment filed 10/31/2025.
Claims 1-20 have been examined.
Response to Amendment
In the instant amendment, claims have been amended.
The 35 U.S.C. § 112 rejection of claims 1-20 is withdrawn in view of Applicant’s amendments.
Information Disclosure Statement
As required by M.P.E.P. 609, the applicant’s submissions of the Information Disclosure Statements dated 02/26/2018 and 03/06/2017 are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112, first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor(s), at the time the application was filed, had possession of the claimed invention.
In this case, the limitations recited in claims 1, 11 and 20 of "the execution of the compiled binary of the NN model being post-training of the NN model", “updating the set of layers that are mutable during execution of a compiled binary of the NN model without re-compiling the NN model, the updating being post-training of the NN model”, and “updating, while the compiled binary of the neural network model is currently executing on the computing device and without recompiling the compiled binary” cannot be found specifically in the specification. After carefully reviewing the drawings, the examiner cannot find support in the drawings for "the execution of the compiled binary of the NN model being post-training of the NN model", “updating the set of layers that are mutable during execution of a compiled binary of the NN model without re-compiling the NN model, the updating being post-training of the NN model”, and “updating, while the compiled binary of the neural network model is currently executing on the computing device and without recompiling the compiled binary”. Furthermore, after reviewing the specification, the examiner has only found the specification to recite: "For training a deep neural network, a common approach is utilizing a graphical processing unit (GPU), and also for executing the deep neural network on new input data post-training." (paragraph 0014) and "Implementations of the subject technology described herein improve the computing functionality of an electronic device by enabling parameters (e.g., weights) of a neural network to be updated while being executed by the electronic device without expending additional computing resources required when recompiling the neural network in order to update the parameters" (paragraph 0015). These passages do not describe "the execution of the compiled binary of the NN model being post-training of the NN model", “updating the set of layers that are mutable during execution of a compiled binary of the NN model without re-compiling the NN model, the updating being post-training of the NN model”, or “updating, while the compiled binary of the neural network model is currently executing on the computing device and without recompiling the compiled binary”. Therefore, the examiner requests further support for, or removal of, such new matter.
Claims 2-10 and 12-19 are rejected based on their dependency from claims 1 and 11, respectively.
Allowable Subject Matter
Claims 2, 4-6, 8-10, 12, 14-16, and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112, first paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over US 2009/0024491 to Choubey in view of US 2018/0144244 to Masoud et al. (hereafter “Masoud”) and US 2001/0051858 to Liang et al. (hereafter “Liang”).
As per claim 1, Choubey discloses a method comprising:
receiving code corresponding to a neural network (NN) model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.”) and a set of weights for the NN model (FIG. 4; paragraphs 0040 and 0043); and
generating data corresponding to a set of layers that are mutable in the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable), wherein the generated data enables updating the set of layers that are mutable during execution of a binary of the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable) of a binary of the NN model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.” → the neural network/model is an executable algorithm → a binary file).
Masoud further discloses a compiled binary of the NN model (paragraph 0059).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Masoud into Choubey’s teaching because it would provide for receiving, processing, and analyzing characteristics of medical images using an algorithm reflecting prior training of the deep neural network (Masoud, paragraph 0059).
Liang further discloses the execution of the compiled binary of the NN model being post-training of the NN model (paragraph 0020).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liang into Choubey’s teaching and Masoud’s teaching because it would provide for conducting real-time quality prediction and providing the appropriate ranges of the parameters of the injection molding machine (Liang, paragraph 0009).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud and Liang, as applied to claim 1, and further in view of US 2019/0164037 to Kim et al. (hereafter “Kim”).
As per claim 3, Choubey does not explicitly disclose wherein the code further includes parameters for bias values and scale values corresponding to the weights for the NN model.
Kim further discloses wherein the code further includes parameters for bias values (paragraph 0072) and scale values corresponding to the weights for the NN model (paragraphs 0059 and 0072).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kim into Choubey’s teaching, Masoud’s teaching and Liang’s teaching because it would provide for using a systolic array and a method thereof that uses the operational result for one layer as an input to the operation for a next layer, while using the systolic array easily, and efficiently storing an input feature map and an output feature map (Kim, paragraph 0008).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud and Liang, as applied to claim 1, and further in view of US 2019/0354708 to Fisher et al. (hereafter “Fisher”).
As per claim 7, Choubey does not explicitly disclose wherein the data includes information corresponding to offsets of a set of operations performed by a respective mutable layer of the NN model.
Fisher further discloses wherein the data includes information corresponding to offsets of a set of operations performed by a respective mutable layer of the NN model (paragraph 0163).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Fisher into Choubey’s teaching, Masoud’s teaching and Liang’s teaching because it would provide that the leaves of a metadata representation may include pointers to the stored data for a volume, or portion of a volume, where a logical address, or a volume and offset, may be used to identify and navigate through the metadata representation to reach one or more leaf nodes that reference stored data corresponding to the logical address (Fisher, paragraph 0163).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud, Liang and US 6,901,440 to Bimm et al. (hereafter “Bimm”).
As per claim 11, Choubey discloses a system comprising:
a processor (paragraph 0018);
a memory device containing instructions (paragraph 0018), which when executed by the processor cause the processor to:
receive code corresponding to a neural network (NN) model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.”) and a set of weights for the NN model (FIG. 4; paragraphs 0040 and 0043); and
generate data corresponding to a set of layers that are mutable in the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable), wherein the generated data enables updating the set of layers that are mutable during execution of a compiled binary of the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable) without re-compiling the NN model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.” → the neural network/model is an executable algorithm → a binary file).
Masoud further discloses a compiled binary of the NN model (paragraph 0059).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Masoud into Choubey’s teaching because it would provide for receiving, processing, and analyzing characteristics of medical images using an algorithm reflecting prior training of the deep neural network (Masoud, paragraph 0059).
Liang further discloses the execution of the compiled binary of the NN model being post-training of the NN model (paragraph 0020).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Liang into Choubey’s teaching and Masoud’s teaching because it would provide for conducting real-time quality prediction and providing the appropriate ranges of the parameters of the injection molding machine (Liang, paragraph 0009).
Bimm further discloses without re-compiling the NN model (column 3, lines 60-65; column 19, lines 20-25).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bimm into Choubey’s teaching, Masoud’s teaching, and Liang’s teaching because it would provide for incorporating object behavior concepts with the existing network management approach to create an SMS/NMS/EMS/OSS that significantly reduces the human effort to integrate network element configuration and provisioning for new and modified network elements (Bimm, column 19).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud, Liang and Bimm, as applied to claim 11, and further in view of US 2019/0164037 to Kim et al. (hereafter “Kim”).
As per claim 13, Choubey does not explicitly disclose wherein the code further includes parameters for bias values and scale values corresponding to the weights for the NN model.
Kim further discloses wherein the code further includes parameters for bias values (paragraph 0072) and scale values corresponding to the weights for the NN model (paragraphs 0059 and 0072).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Kim into Choubey’s teaching, Masoud’s teaching, Liang’s teaching and Bimm’s teaching because it would provide for using a systolic array and a method thereof that uses the operational result for one layer as an input to the operation for a next layer, while using the systolic array easily, and efficiently storing an input feature map and an output feature map (Kim, paragraph 0008).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud, Liang and Bimm, as applied to claim 11, and further in view of US 2019/0354708 to Fisher et al. (hereafter “Fisher”).
As per claim 17, Choubey does not explicitly disclose wherein the data includes information corresponding to offsets of a set of operations performed by a respective mutable layer of the NN model.
Fisher further discloses wherein the data includes information corresponding to offsets of a set of operations performed by a respective mutable layer of the NN model (paragraph 0163).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Fisher into Choubey’s teaching, Masoud’s teaching, Liang’s teaching and Bimm’s teaching because it would provide that the leaves of a metadata representation may include pointers to the stored data for a volume, or portion of a volume, where a logical address, or a volume and offset, may be used to identify and navigate through the metadata representation to reach one or more leaf nodes that reference stored data corresponding to the logical address (Fisher, paragraph 0163).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Choubey in view of Masoud, Bimm and US 2001/0034628 to Eder.
As per claim 20, Choubey discloses a non-transitory computer-readable medium comprising instructions, which when executed by a computing device, cause the computing device to perform operations comprising:
receiving a weight including information corresponding to a set of values for updating a set of weights of a neural network model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable) of a binary of the NN model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.” → the neural network/model is an executable algorithm → a binary file), a binary of the neural network model currently executing on the computing device (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.”);
determining data for updating the set of weights of the neural network model based on information provided in the compiled binary of the neural network model (FIG. 4; paragraphs 0036-0037 and 0040-0041: during the execution of the neural network/model → receiving input and calculating the weights based at least on the provided input); and
updating, while the compiled binary of the neural network model is currently executing on the computing device (FIG. 4; paragraphs 0036-0037 and 0040-0041: during the execution of the neural network/model → receiving input and calculating the weights based at least on the provided input), the set of weights of the neural network model based at least in part on the data and the weight (FIG. 4; paragraphs 0036-0037 and 0040-0041: the weights are generated based on the input and/or the other weights (i.e., w5-8 are generated based on inputs and w1-6)).
Masoud further discloses a compiled binary of the NN model (paragraph 0059).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Masoud into Choubey’s teaching because it would provide for receiving, processing, and analyzing characteristics of medical images using an algorithm reflecting prior training of the deep neural network (Masoud, paragraph 0059).
Bimm further discloses without re-compiling the NN model (column 3, lines 60-65; column 19, lines 20-25).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bimm into Choubey’s teaching and Masoud’s teaching because it would provide for incorporating object behavior concepts with the existing network management approach to create an SMS/NMS/EMS/OSS that significantly reduces the human effort to integrate network element configuration and provisioning for new and modified network elements (Bimm, column 19).
Eder further discloses the weight file (FIG. 2; paragraphs 0048 and 0062).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Eder into Choubey’s teaching, Masoud’s teaching, and Bimm’s teaching because it would provide for storing all extracted information concerning revenue, expenses, capital and elements of value in a file or table (hereinafter, table) within an application database (Eder, paragraph 0048).
Response to Arguments
Applicant's arguments filed on 10/31/2025 have been fully considered but they are not persuasive for the following reasons:
Claim Rejections - 35 U.S.C. § 112 (Remarks, page 6)
The applicants argue “Paragraph [0015] for example, describes that “[i]mplementations of the subject technology described herein improve the computing functionality of an electronic device by enabling parameters (e.g., weights) of a neural network to be updated while being executed by the electronic device without expending additional computing resources required when recompiling the neural network in order to update the parameters.” Paragraph [0027] describes that “[t]he compiler 252 generates a compiled binary of the NN model, and during compilation is configured to generate metadata that enables weights of the NN model to be updated during runtime.” Accordingly, this assertion in the Office Action is expressly in error.” (Remarks, page 6)
The Examiner respectfully disagrees for the following reasons.
The applicants cite paragraphs 0015 and 0027, which, for example, describe that “[i]mplementations of the subject technology described herein improve the computing functionality of an electronic device by enabling parameters (e.g., weights) of a neural network to be updated while being executed by the electronic device without expending additional computing resources required when recompiling the neural network in order to update the parameters.” and “[t]he compiler 252 generates a compiled binary of the NN model, and during compilation is configured to generate metadata that enables weights of the NN model to be updated during runtime.” (Remarks, page 6). However, these paragraphs do not describe "the execution of the compiled binary of the NN model being post-training of the NN model", “updating the set of layers that are mutable during execution of a compiled binary of the NN model without re-compiling the NN model, the updating being post-training of the NN model”, or “updating, while the compiled binary of the neural network model is currently executing on the computing device and without recompiling the compiled binary”. Therefore, the examiner requests further support for, or removal of, such new matter.
Independent claim 1 (Remarks, page 7)
Applicants argue “The Office action asserts that in Choubey input, state of node and weight are parts of the layer (FIG. 4) and they are changeable … however, FIG. 4 of Choubey does not illustrate or state that state of node and weight are changeable …” (Remarks, page 7)
The Examiner respectfully disagrees.
Choubey paragraphs 0030-0032 and 0039-0041 state:
[0030] Act 240 includes analyzing the data of act 235 to calculate and display costs associated with current utilization of the assets 105, 110, and 115. Act 245 includes calculating a projected demand or utilization of the at least one asset 105, 110, and 115. An embodiment of the act 245 includes calculating a trend or slope of the acquired or historical data for the measured utilization of the asset 105, 110, and 115 over a selected time interval. The act 245 can include executing a linear or non-linear regression analysis, a least squares analysis, or other conventional mathematical techniques to calculate a slope (e.g., assets per day) approximating the trend in the acquired data of the utilization of the selected asset 105, 110, and 115 over the selected time interval (e.g., 365 days, monthly). The act 245 can further include aggregating (e.g., minimum, maximum, average, sum, count, etc.) and/or normalizing the slope (e.g., to a value of one). The act 245 can further include multiplying the calculated slope with a selected projected time interval so as to calculate the projected demand or utilization of the asset 105, 110, 115 for the projected time interval. The calculated projected demand can be adjusted with one or more periodically upgraded factors for existing assets 105, 110 and 115 and one or more business direction factors. For example, the upgrade factor can be adjusted based on comparison of performance of existing to new assets 105, 110, and 115. The factors can also be representative of a predicted useful life of the asset 105, 110, 115. Values of the factors for the performance or useful life can be updated based on the acquired data from the assets 105, 110, and 115 over time.
[0031] An embodiment of calculating the projected demand looking forward in time can also adjusted by a business adjustment factor representative of a business direction as indicated by the user. For example, the business adjustment factor can be calculated to reflect inputted user information for expansion or shrinkage of the facility, addition or removal of departments or services, local competition, etc. received via the user input device 165. The projected demand would then be calculated by multiplying the number of assets 105, 110, and 115, the normalized value of the calculated slope approximating the trend in demand, the upgrade factor, and the business adjustment factor. Act 245 can further include communicating the projected demand over the projected time interval for illustration or display at the output device 170. An example of the projected demand can be for a projected rental demand of the selected asset 105, 110, and 115. An embodiment of act 245 can also include dependence on parameters for demographic changes of each department, growth of each department, etc., and adjusting the projected demand in accordance or in correlation to the value of the parameters. For example, act 245 can include calculating a patient increase factor (PIF) for each department of the entity, and multiplying the projected demand with the (PIF) to compute an adjusted projected demand.
[0032] For example, the acquired asset utilization data per day is used in an algorithm to calculate values of parameters in calculating a prediction of a number of each type of asset 105, 110, and 115 in the entity. According to this example, daily asset utilization data acquired for a particular type of asset 105, 110, and 115 is aggregated to three days, five days, thirty days, and twelve months. The daily utilization, three-day utilization, five-day utilization, thirty-day utilization, and twelve-month utilization are implemented as parameters in the algorithm to predict a future need of the assets 105, 110, and 115. Other additional parameters implemented to predict future needs of assets 105, 110, and 115 include evolution parameters of the assets 105, 110, and 115, financial parameters, commercial parameters, entity growth parameters, etc. The above-described forecast or predicted demands or needs of each asset 105, 110, and 115 or asset type thereof is aggregated for illustration to the user.
[0039] Each node generally represents a mathematical formula of comparison to produce a binary result or value. For example, one embodiment of the input layer of nodes 265 includes a series of nodes 280, 282, 284, 286 each representative of a summation of input values received or acquired for the following group of parameters: one-day average node 280, three-day average node 282, five-day average node 284, thirty-day average node 286 representative of measured parameters of asset utilization. Yet, the input layer of nodes can include additional nodes representative of other parameters of asset utilization (e.g., a three-month average (a90), and one-year average (a356) of measured asset utilization). The result or output communicated from each of the input nodes 280, 282, 284, and 286 is generally representative of a state of the node. One embodiment of the states of the nodes includes binary values 0, 1, and 2. Yet, values of the states of the nodes can vary.
[0040] The embodiment of nodes 290, 292, 294, and 296 comprising the hidden layer of nodes 270 are joined or coupled by a series of connections 300 to receive the output or states of the nodes in the input layer 265. Each connection 300 leading from the input nodes 280, 282, 284, 286 generally represents an assigned empirical value of a weight to be multiplied by each value or state of the input layer node 280, 282, 284, and 286 that the connections 300 leads from. Each node 290, 292, 294, 296 in the hidden layer 270 is also coupled by a connection 300 to itself. An embodiment of each node 290, 292, 294 and 296 of the hidden layer of nodes 270 and the output layer of nodes 275 is representative of a summation of all input values or states correlated to the input nodes 280, 282, 284 and 286 joined by connections 300 thereto multiplied by the above-described empirical values of weights of the joining connections 300 from the respective input nodes 280, 282, 284, 286 to the nodes 290, 292, 294, 296 of the hidden layer of nodes 270 for comparison relative to a predetermined threshold value.
[0041] As a specific example, node 290 represents the following mathematical summation: (node 280*w1+node 282*w2+node 284*w3+node 286*w4+node 290*w5+node 292*w6+node 294*w7+node 296*w8+ . . . +node n*wn) for comparison to a predetermined threshold, where w1, w2, w3, w4, . . . , wn are assigned empirical values represented by the connections 300 joining the respective input nodes 280, 282, 284, 286, 290, 292, 294, 296, etc. to the hidden layer node 290. The other nodes in the hidden layer and the output layer represent similar mathematical functions or formulas. Alternatively, it should be understood that the nodes can represent other types of mathematical functions, formulas or equations than the subject matter described herein.
Accordingly, in view of FIGs. 4 and 5, the input layer nodes 280-286 each represent a summation of input values received or acquired for a group of parameters representative of measured parameters of asset utilization. The measured parameters of asset utilization are changeable (paragraphs 0030-0032 and 0039); therefore, the input values of the input layer nodes are changeable. Since the inputs are changeable, the outputs calculated from the inputs are changeable; therefore, the nodes of the other layers that receive the outputs from the input nodes are also changeable.
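For clarity of the record, the comparison described in Choubey paragraph 0041 may be restated as the following weighted summation, where the notation (a_i for the state or output of node i, w_j for the assigned empirical weight of connection j, and T for the predetermined threshold) is introduced here only for illustration and does not appear in Choubey:

$$a_{280}w_1 + a_{282}w_2 + a_{284}w_3 + a_{286}w_4 + a_{290}w_5 + a_{292}w_6 + a_{294}w_7 + a_{296}w_8 + \cdots + a_n w_n \;\lessgtr\; T$$

Because both the node states a_i and the empirical weights w_j can take different values (paragraphs 0039-0043), the value computed at node 290 changes accordingly.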
Choubey paragraphs 0016-0017, 0025, 0039-0040, and 0042 state:
[0016] The system 100 includes a series of tracking elements 125, 130, and 135 located for each asset 105, 110 and 115, respectively. The tracking elements 125, 130, and 135 are generally operable to create a signal indicative of a location or state of the respective assets 105, 110 and 115. Examples of the tracking elements 125, 130, and 135 can include a geographic positioning system (GPS) receiver in communication with a satellite, electromagnetic receivers and transmitters, radio frequency identification (RFID) tags, radio frequency (rf) transmitters and receivers, bar code, or the like or combination thereof operable to locate a position (e.g., a room location at a facility, a geographic location having a latitude and longitude, a coordinate, etc.) of the respective assets 105, 110, and 115 relative to a reference. The type of technique of tracking (e.g., electromagnetic, optical, global positioning relative to a satellite, etc.) can vary.
[0017] The system 100 further includes a controller 150 in communication with the tracking elements 125, 130, and 135 so as to track movement of the assets 105, 110 and 115 between various states or locations. The communication of the controller 150 with the tracking elements 125, 130, and 135 can be via a wireless connection (e.g., radio frequency, etc.) or wired connection (e.g., communication bus, etc.) or combination thereof to track movement of the series of assets 105, 110 and 115. Communication can be direct, or over an Internet network or an Ethernet network or a local area network (LAN).
[0025] Referring now to FIG. 3, one embodiment of the various states or status indicators of utilization of each asset 105, 110, and 115 at any given time as tracked in act 215 includes a USE status or state 220, a DIRTY state 225, a CLEANING state 230, an INVENTORY state 235, and a SERVICE state 240 of each of the assets 105, 110, and 115. The USE state 220 represents the assets 105, 110, and 115 being utilized by a patient or subject either in a patient room or with the patient transitioning from one point or location to another (e.g., for a walk, to get testing, etc.). The DIRTY state 225 represents the assets 105, 110, and 115 being temporarily stored before being taken to a location of a CLEANING state 230 or, if malfunctioning, to the SERVICE state 240. The CLEANING state 230 represents status of the assets 105, 110, and 115 in the process of being cleaned of contamination or under routine maintenance so as to be available for future utilization according to the USE state 220. The INVENTORY state 235 represents status of the assets 105, 110, and 115 that have previously been moved from the CLEANING state 230 and are now in storage and ready for use in accordance with the USE state 220 described above. The SERVICE state 240 represents status of the assets 105, 110, and 115 after malfunctioning or requiring repair or to be discarded.
[0039] Each node generally represents a mathematical formula of comparison to produce a binary result or value. For example, one embodiment of the input layer of nodes 265 includes a series of nodes 280, 282, 284, 286 each representative of a summation of input values received or acquired for the following group of parameters: one-day average node 280, three-day average node 282, five-day average node 284, thirty-day average node 286 representative of measured parameters of asset utilization. Yet, the input layer of nodes can include additional nodes representative of other parameters of asset utilization (e.g., a three-month average (a90), and one-year average (a356) of measured asset utilization). The result or output communicated from each of the input nodes 280, 282, 284, and 286 is generally representative of a state of the node. One embodiment of the states of the nodes includes binary values 0, 1, and 2. Yet, values of the states of the nodes can vary.
[0040] The embodiment of nodes 290, 292, 294, and 296 comprising the hidden layer of nodes 270 are joined or coupled by a series of connections 300 to receive the output or states of the nodes in the input layer 265. Each connection 300 leading from the input nodes 280, 282, 284, 286 generally represents an assigned empirical value of a weight to be multiplied by each value or state of the input layer node 280, 282, 284, and 286 that the connections 300 leads from. Each node 290, 292, 294, 296 in the hidden layer 270 is also coupled by a connection 300 to itself. An embodiment of each node 290, 292, 294 and 296 of the hidden layer of nodes 270 and the output layer of nodes 275 is representative of a summation of all input values or states correlated to the input nodes 280, 282, 284 and 286 joined by connections 300 thereto multiplied by the above-described empirical values of weights of the joining connections 300 from the respective input nodes 280, 282, 284, 286 to the nodes 290, 292, 294, 296 of the hidden layer of nodes 270 for comparison relative to a predetermined threshold value.
[0042] In a similar manner to that described above, the result or outcome of the comparison at the output node 320 of the output layer 274 is equated to a binary value or state of 0, 1 or 2 of the node. Each binary value or state of the output layer node 320 is correlated or associated to a best mode (e.g., purchase=0 versus lease/rent=1, or other procurement means=2) to procure or acquire the assets 105, 110, and 115.
Accordingly, the states of the nodes are changeable among the values 0, 1, and 2.
Furthermore, Choubey teaches:
[0036] An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator. The classifier algorithm is configured to produce a binary output or result representative of a best mode of whether to procure the assets 105, 110, and 115 via purchase versus rent/lease. Parameters or factors incorporated in the classifier algorithm to calculate the best mode (e.g., purchase, lease, rent, etc.) to procure the assets 105, 110, and 115 includes price (purchase price versus rent/lease cost), parameter representative of an availability of the asset, buying a parameter representative of a favorability of terms of purchase in comparison to terms of rent/lease, parameter representative of a degree of change in product evolution versus current asset, a parameter representative of a business direction of the entity (e.g., expansion or contraction of budget), change in tax laws, change in inflation, etc.
[0043] Referring to FIG. 5, an embodiment of the act 255 further includes an act 350 of executing a back propagation technique to calculate the empirical values or weights represented by the connections 300. Referred to as training the model of the algorithm, act 355 includes calculating empirical values (e.g., w1, w2, w3, w4, etc.) for the connections 300 in a backward manner or fashion based on acquired predetermined acceptable errors for historical data of input values known to produce known outcomes (illustrated by reference 360) of the best mode or manner to procure assets 105, 110, and 115. The act of training 255 creates or adjusts the algorithm to be consistent with previous/past decisions of the best mode to procure projected demands of assets 105, 110, 115. Calculated values for the weights as determined using the back propagation technique dependent on historical outcomes are then adopted as the current model of the algorithm to calculate or generate outcomes of the manner to procure a projected demand of the assets 105, 110, and 115.
[0046] A technical effect of the system 100 and method 200 described above is to execute a calculation of a need of one or more assets based on historical data of asset utilization, clustering or segmentation of individual entities (third party entities such as hospitals, clinics, etc. of similar infrastructure and willing to share data and analytic output generated using the system and method), and values indicative or representative of a business direction of the entity looking toward the future. Another technical effect includes generating an output for a current state of asset utilization compared to third party entities with similar infrastructure and willing to share asset utilization data and analytic output using the system and method. Yet another technical effect includes generating plans to improve utilization of existing assets, as well as recommending disposal of existing assets and procurement of new assets.
Therefore, Choubey discloses generating data corresponding to a set of layers that are mutable in the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable), wherein the generated data enables updating the set of layers that are mutable during execution of a binary of the NN model (FIG. 4; paragraphs 0039-0043: “input values received or acquired for the following group of parameters”, “result or output communicated from each of the input nodes … is generally representative of a state of the node”, and “each connection 300 leading from the input nodes 280 … generally represents an assigned empirical value of a weight” → input, state of node and weight are parts of the layer (FIG. 4) and they are changeable) of a binary of the NN model (FIG. 4; paragraphs 0036-0037: “An embodiment of act 255 can further include creating and executing a recurrent neural network classifier algorithm operable to generate the output of the best mode of procurement of assets 105, 110, and 115 for illustration to the operator.” → the neural network/model is an executable algorithm → a binary file).
As per independent claims 11 and 20, similar responses apply as provided above with respect to claims 1 and 7.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication should be directed to examiner Tuan Dao, whose telephone and fax numbers are (571) 270-3387 and (571) 270-4387, respectively. The examiner can normally be reached every Monday-Thursday and the second Friday of the bi-week from 7:30 AM to 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at telephone number (571) 272-4215.
The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the TC 2100 Group receptionist, whose telephone number is (571) 272-2100.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/TUAN C DAO/Primary Examiner, Art Unit 2198