DETAILED ACTION

This Non-Final Office Action is in response to Applicant’s filing on 09/04/2024. Claims 1-20 are pending. The effective filing date of the claimed invention is 03/102/2021.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.— The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2, 3, 9, and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 2, 3, 9, and 10 recite the limitation “favorite” in various lines. This is a relative term that is not sufficiently defined in the Specification or in the claims. See Applicant’s Spec. at [0060], where the “favorite CNN filter size information” is described as having “good calculation efficiency.” The description further illustrates the relative nature of the term. This renders the claims indefinite. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea.

Step 1 – Claims 1-17 are device claims; claim 18 is a process claim; and claims 19-20 are CRM claims. Claims 1-18 pass Step 1. Claims 19-20 do not pass Step 1 because they encompass both transitory and non-transitory embodiments; the examiner recommends amending claims 19-20 to recite “a non-transitory CRM.” Appropriate correction is required.

Step 2A Prong 1 – Exemplary claim 1 (and similarly claims 8 and 17-20) recites the following abstract idea:

Transmitting data from one entity to another (e.g., MPEP 2106.04(a)(2)(III)(A), claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); communicating and transmitting data from one entity to another is a common business practice, see MPEP 2106.04(a)(2)(II)(A-B); see also Step 2A Prong 2 and Step 2B);

Generating a neural network (see MPEP 2106.04(a)(2)(III), pen and paper, and MPEP 2106.04(a)(2)(I), mathematical concepts); and

Processing the neural network (e.g.,
MPEP 2106.04(a)(2)(III)(A), claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); see also Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. 2025) (“Recentive”), where the claim in Recentive reads on processing a neural network – “See, e.g., ’367 patent, col. 6 ll. 1–5 (requiring ‘any suitable machine learning technology . . . such as, for example: a gradient boosted random forest, a regression, a neural network, a decision tree, a support vector machine, a Bayesian network, [or] other type of technique’); ’811 patent, col. 3 l. 23 (requiring the application of ‘any suitable machine learning technique.’)”).

When these limitations are viewed alone and in ordered combination, the examiner finds the independent claims to recite an abstract idea.

Step 2A Prong 2 – The examiner does not find claims 1, 8, and 17-20 to integrate the abstract idea into a practical application. The additional elements in claim 1 are transmitting data to a server device, and the server device generating a neural network and supplying data. For the server device, the examiner refers to MPEP 2106.05(f): “Other examples where the courts have found the additional elements to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process include: ii. Generating a second menu from a first menu and sending the second menu to another location as performed by generic computer components, Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016)” (emphasis added). Claims 17-18 include an edge device, but this is a standard generic computer used in an “apply it” manner. See MPEP 2106.05(f).

Step 2B – The examiner does not find the independent claims to include significantly more than the abstract idea itself. The additional-limitation analysis from Step 2A Prong 2 applies equally to Step 2B. Another consideration when determining whether a claim recites significantly more than a judicial exception is whether the additional element(s) are well-understood, routine, conventional (WURC) activities previously known to the industry. This consideration is only evaluated in Step 2B of the eligibility analysis. See MPEP 2106.05(d). For the transmitting of data from one device to another, see MPEP 2106.05(d)(II): “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir.
2014) (‘Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result‐‐a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink.’ (emphasis added))”.

For the device that generates the neural network, see MPEP 2106.05(d)(II): “ii. Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) (‘The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims.’)”.

Therefore, when viewed alone and in ordered combination, claims 1, 8, and 17-20 are not found to include significantly more and are found to be directed to an abstract idea.

Dependent Claims – Claims 2, 3, 9, and 10 recite further abstract idea, including further characterizing the data. E.g., MPEP 2106.04(a)(2)(III). Claims 4 and 11 recite more WURC activity. See MPEP 2106.05(d)(II), finding transmitting data over a network to be WURC. Claims 5 and 6 recite more abstract idea. See Elec. Power Group. Claim 7 recites more abstract idea. See, e.g., Recentive. Claims 12-16 recite more abstract idea. See Recentive and MPEP 2106.04(a)(2)(A), organizing information and manipulating the results. All claims are found to be directed to an abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 8, 9, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. No. 2019/0318268 to Wang et al. (“Wang”) in view of U.S. Pat. Pub. No. 2020/0272899 to Dunne et al. (“Dunne”).

With regard to claims 1, 8, and 17-20, Wang discloses the claimed information processing device (see Wang [0027], [0033], describing a distributed computing system with edge nodes and synchronization nodes, where edge nodes can include sensors, gateways, micro-servers, or wireless communication access nodes) comprising: a transmitting section (e.g., Wang [0071]: “At block 312, the edge node 304 sends the model parameter m.sub.i and the resource parameter set P.sub.i to the synchronization node 302.”) that transmits, to a server device (Wang, Fig. 5 and associated text, describes transmitting information from the edge node to a synchronization node, which functions as a central server in the distributed architecture; the edge node sends the model parameter and the resource parameter set to the synchronization node.
The synchronization node receives information from multiple edge nodes and updates the global model) that generates a neural network (see Wang [0058], where the machine learning referred to in Wang can be a neural network; Wang does not disclose where the server/central computer generates and sends a neural network; Dunne teaches, e.g., at the abstract and [0112], that the centralized device generates and transmits/sends the neural network; Dunne [0112]: “In operation block 614, the centralized site/device 604 may train a neural network solution on the labelled data. In operation 616, the centralized site/device 604 may send the neural network to the edge device 602.”; see also Dunne [0213]), information related to a processing capability (see Wang [0070]: “At block 310, edge node 304 performs local iterations of a training process to produce a new model parameter m.sub.i (which may be equivalent to w.sub.i(t)) and an estimation of a resource parameter set P.sub.i. The resource parameter set P.sub.i may include, for example, metrics including two or more of: available computation resources (e.g., available CPU cycles) at the edge node 304; available communication resources (e.g., bandwidth, delay) between the edge node 304 and the synchronization node 302; data distribution at the edge node 304 (e.g., capturing the similarity/difference of a local dataset at the edge node 304 from the collection of all local datasets at all edge nodes 304), which can be estimated based on a function computed on the gradients computed at different edge nodes 304; and, noisiness of the data at the edge node 304 (such as the statistical divergence of data samples).”) for processing the neural network supplied from the server device (see, e.g., Wang [0070]).

Therefore, it would have been obvious to one of ordinary skill in the edge node art before the effective filing date of the claimed invention to modify Wang’s system, which already transmits resource capability information of the edge node to the central node (shown above), to include Dunne’s ability to deploy a neural network from a central device to edge devices, where using the edge resource information of Wang to inform which neural network to deploy from Dunne’s centralized training system would improve model deployment efficiency and ensure that the neural network is compatible with the computational resources of the edge device. See also Dunne [0213], reciting the advantage of being able to “train the large data volume parts of the stratified neural network (e.g., feature identification layers, higher numerical precision, multiple partial layers that are not partitioned or cross-connected, etc.) on data from similar sensors, and train the small data parts of the neural network (e.g., fully connected layers, lower numerical precision, cross-connected weights between partial layers, etc.) on data collected from similar or related sensors (e.g., data from similar deployment environments but not necessarily from similar sensors, etc.). The centralized site/device may select an ensemble of lightweight neural networks, and train the entire neural network based on the labelled training data.” See also Dunne [0062]: “In the above example, the accuracy of the inference operations could be improved by sending data captured in the deployment location to the centralized site/device for additional training.”
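For illustration only (this sketch is not part of the record and is not drawn from Wang or Dunne; every name, field, and value in it is hypothetical), the mapped exchange, in which an edge device transmits processing-capability information to a server device that generates or selects a neural network and supplies it back, can be pictured as follows:

```python
# Minimal hypothetical sketch of the mapped limitation; all names are invented.
from dataclasses import dataclass


@dataclass
class Capability:
    available_cpu_cycles: int   # cf. Wang [0070]: available computation resources
    bandwidth_mbps: float       # cf. Wang [0070]: available communication resources
    memory_mb: int              # illustrative additional resource metric


def edge_report_capability() -> Capability:
    # Edge side: the "transmitting section" gathers this and sends it upstream.
    return Capability(available_cpu_cycles=2_000_000_000,
                      bandwidth_mbps=50.0,
                      memory_mb=512)


def server_select_network(cap: Capability) -> str:
    # Server side: generate/select a network compatible with the reported
    # resources (cf. Dunne [0112], where the centralized site trains a neural
    # network and sends it to the edge device).
    return "tiny_cnn" if cap.memory_mb < 256 else "full_cnn"


# Hypothetical round trip: the edge reports its capability; the server returns
# a model choice sized to that capability.
chosen = server_select_network(edge_report_capability())
```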
With regard to claims 2 and 9, Wang further discloses that the information related to the processing capability includes at least one of capacity information of the neural network, filter size information of a favorite convolutional neural network, hardware architecture type information, chip information, and device model number information (Wang [0073]-[0074], Fig. 3).

With regard to claim 15, Dunne teaches at [0132] to additionally train or re-train the neural network in the centralized site/device 604. See combination above.

Claim(s) 3, 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Dunne, and Cai et al., Once-for-All: Train One Network and Specialize It for Efficient Deployment, arXiv:1908.09791v5 [cs.LG], 29 Apr. 2020 (“OFA”).

With regard to claims 3 and 10, the combination further teaches that the information related to the processing capability includes all of: capacity information of the neural network (Wang [0073]-[0074], Fig. 3; see OFA abstract, “<600M MACs,” which is a direct model-capacity/computational-budget type disclosure); filter size information of a favorite convolutional neural network (Wang does not disclose; see OFA abstract, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution)); hardware architecture type information (Wang does not disclose; see OFA at, e.g., page 1, Introduction, discussing diverse hardware platforms); chip information (Wang does not disclose; OFA discusses at the abstract deployment on diverse hardware platforms, including DSP-track results and many device-specific pretrained models); and device model number information (Wang does not disclose; OFA teaches pre-trained models for many devices and many latency constraints, where including an identifier of the device model for these device-specific pretrained models would be a predictable use of known elements). Therefore, it would have been obvious to one of ordinary skill in the edge device art to include the ability to identify these elements, as doing so allows the system to tailor device-specific configurations, as shown in OFA at, e.g., the abstract, allowing further customization for specific devices and requirements. (An illustrative sketch of such a capability message appears below, following the discussion of claims 4 and 11.)

Claim(s) 4, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Dunne, and U.S. Pat. Pub. No. 2021/0295094 to Lee et al. (“Lee”).

With regard to claims 4 and 11, Wang still discloses the edge device transmitting capability/resource information upstream. However, Wang does not disclose the rest of the claim, which is directed to custom workflows using multiple cameras and is expressly tied to the NICE standard. See Lee at Fig. 1B, [0032] and associated text, describing the use of the NICE standard. Therefore, it would have been obvious to one of ordinary skill in the edge computing devices art to include such NICE standard processing, as shown in Lee at [0032]: “The workflow sorts and processes the sensor data and makes it more searchable. It also presents the results to the end users or other applications, which can then analyze the processed data more easily than raw sensor data.”
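For illustration only, the capability categories recited in claims 2-3 and 9-10 and mapped above (Wang [0073]-[0074]; OFA) might be serialized as shown below; every field name and value here is hypothetical and not taken from the references:

```python
# Hypothetical sketch of a processing-capability message; field names merely
# mirror the claim categories (capacity, favorite CNN filter size, hardware
# architecture type, chip information, device model number).
from dataclasses import dataclass, asdict
import json


@dataclass
class ProcessingCapabilityInfo:
    nn_capacity_macs: int          # model-capacity budget, cf. OFA "<600M MACs"
    favorite_cnn_filter_size: int  # kernel size the device computes efficiently
    hw_architecture: str           # e.g. "dsp", "gpu", "cpu" (cf. OFA's platforms)
    chip_id: str                   # hypothetical chip identifier
    device_model_number: str       # hypothetical device model identifier


info = ProcessingCapabilityInfo(
    nn_capacity_macs=600_000_000,
    favorite_cnn_filter_size=3,
    hw_architecture="dsp",
    chip_id="example-chip-01",
    device_model_number="example-model-A",
)
payload = json.dumps(asdict(info))  # serialized for transmission to the server device
```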
Claim(s) 5-7, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang and Dunne, in view of U.S. Pat. Pub. No. 2020/0034710 to Sidhu et al. (“Sidhu”).

With regard to claims 5, 12, and 16, Wang still provides the broader edge-to-server reporting framework, but does not disclose the rest of the claim. Dunne teaches that the centralized site/device trains or updates a neural network and sends it to the edge device, and that the edge device executes it (abstract, and throughout). The remaining gap is the explicit claim requirement that the edge device measures a processing time of the received neural network and transmits that processing time to the server device. The third reference, Sidhu, teaches at the abstract and [0059]-[0070] a performance evaluator for a neural-network model that measures runtime performance on a target embedded processor, including: measuring the performance of the model by profiling machine code (Sidhu, e.g., [0059]); determining latency in completing model operations (Sidhu, e.g., [0060]); determining throughput/frame rate ([0059]-[0070]); and instructing the embedded processor to execute the model and empirically measuring the frame rate/latency (e.g., [0054], [0060], [0061]: “the performance evaluator 280 instructs the embedded processor to execute the operations implementing the model and empirically measures a frame rate at which the processor completes the operations implementing the model.”). Therefore, it would have been obvious to one of ordinary skill in the art to include such measuring aspects, as found in Sidhu, and then to send this measurement upstream, per Wang’s upstream-reporting concept. The advantage of the combination is that the server could then choose or update a suitable model for the edge device.

With regard to claim 6, Dunne teaches this at the abstract, [0003]-[0006], and throughout. See combination above.

With regard to claim 7, Dunne teaches this at, e.g., [0124], [0125], [0132], etc. See combination above.

Claim(s) 13, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang and Dunne, in view of U.S. Pat. Pub. No. 2021/0081763 to Abdelfattah (“Ab”).

With regard to claims 13 and 14, Wang provides the edge/server feedback architecture. Dunne provides the server-side regenerate/update-and-resend loop, as the centralized site/device trains or updates the neural network and sends updated neural network information back to the edge device. Ab provides the missing target-latency/hardware-criteria decision rule and the repeat-with-another-network concept when the criterion is not met. See Ab at [0056], [0171], [0174]. Ab teaches that if the estimated latency exceeds the hardware reference, the device determines that the criteria are not satisfied and then selects another of the neural networks instead. See Ab [0083], [0089], [0165], etc. For claim 14, see Ab at [0075], [0093]: “The electronic device 100 may repeat the processor described above until the optimal CNN and FPGA pair are selected.” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Wang to include such features, as the benefit is higher accuracy and higher performance. See Ab [0165].
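For illustration only, the measure-report-reselect loop mapped above for claims 5, 12-14, and 16 (Sidhu’s empirical timing, combined with Wang’s upstream reporting and Ab’s criterion check) might be sketched as follows; the function names and the 50 ms budget are hypothetical, not drawn from the references:

```python
# Hypothetical sketch; not the references' actual implementations.
import time


def measure_processing_time(run_inference, sample) -> float:
    # Edge side: empirically time one inference pass of the received network
    # (cf. Sidhu [0060]-[0061], measuring latency/frame rate on the target).
    start = time.perf_counter()
    run_inference(sample)
    return time.perf_counter() - start


def deploy_until_criterion_met(candidates, run_on_edge, sample, budget_s=0.050):
    # Server side: deploy candidate networks in turn and keep the first whose
    # reported processing time satisfies the latency criterion; otherwise
    # repeat with another network (cf. Ab [0083], [0093]).
    for name, net in candidates:
        elapsed = measure_processing_time(lambda s: run_on_edge(net, s), sample)
        if elapsed <= budget_s:
            return name          # criterion satisfied; stop repeating
    return None                  # no candidate met the hardware budget
```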
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Peter Ludwig, whose telephone number is (571) 270-5599. The examiner can normally be reached Mon-Fri 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fahd Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER LUDWIG/
Primary Examiner, Art Unit 3627