DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the claims 7 & 20 limitation of “determine a type of fuel dispensed based on the audio data” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections

Claims 5 & 18 are objected to because of the following informalities: Claims 5 & 18, in line 3 and lines 5-6 and in line 3 and line 6 (respectively), recite the limitation “the stages of refueling operations”. There is insufficient antecedent basis for this limitation in the claim. Claim 1 & claim 14, in line 4 and lines 3-4 (respectively), state “associated with stages of a refueling operation at a fueling station” in the singular; therefore, there would be antecedent basis for “the stages of a refueling operation”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding “Failure to particularly point out & distinctly claim [indefinite]”:

Claims 1, 8, & 14, in lines 1-2 (each claim), recite the limitation “quantity of fuel dispensed at a fueling station”. For the independent claims at least, it is unclear whether the tanks belonging to the fueling station are being refilled or if the tanks of vehicles are being refilled. For the purposes of examination, and based at least on Fig. 4A, it is assumed that the tanks belonging to the fueling station are being refilled.

Claims 7 & 20, in line 3 (both claims), recite the limitation “wherein the machine learning model is configured to, upon execution: determine a type of fuel dispensed based on the audio data.” It is not clear what means is applied to “determine a fuel type”, nor what physical elements, such as sensors, would be used to make such determinations. One possible means could be ‘use a speaker to determine whether a tank for a particular type of fuel is less full by measuring the depth of the fuel’. A second possibility would be ‘use a speaker to excite oscillations in the fuel; then wavelength and amplitude data could provide density and therefore type of fuel’. One of ordinary skill in the art would not know which means was intended to be the inventive concept of the application.

Claims 6 & 19, in line 4 (both claims), recite the limitation “executing the machine learning model on the image data”. However, there was no previous claim limitation directed towards having trained the machine learning model on image data.
Until this point in the claims (all parent claims), the machine learning model is directed towards recognizing audio signals and not video signals, and as such the machine learning model would not be able to correctly interpret video signals.

Regarding “Lack of antecedent basis in the claims”:

Claims 2-7, in line 1 (each claim), recite the limitation “The method of claim 1[2]”. There is insufficient antecedent basis for this limitation in the claim. There are multiple limitations within claim 1 which could be referred to as “the method of claim 1”, such as “generating audio data from one or more microphones” or “executing a machine learning model on the audio data”, etc. There would be sufficient antecedent basis for “The method of determining a quantity of fuel dispensed of claim 1[2]”.

Claims 9-13, in line 1 (each claim), recite the limitation “The system of claim 8[9][11]”. There is insufficient antecedent basis for this limitation in the claim. There are multiple limitations within claim 8 which could be referred to as “the system of claim 8”, such as “microphone installed at a fueling station” or “memory having instructions”, etc. There would be sufficient antecedent basis for “The system for training a machine learning model of claim 8[9][11]”.

Claims 15-20, in line 1 (each claim), recite the limitation “The system of claim 14[15]”. There is insufficient antecedent basis for this limitation in the claim. There are multiple limitations within claim 14 which could be referred to as “the system of claim 14”, such as “microphone configured to generate audio data” or “processor programmed to execute a machine learning model”, etc.
There would be sufficient antecedent basis for “The system for determining a quantity of fuel of claim 14[15]”.

Claim 8, in line 12, recites the limitation “the segmentations”. There is insufficient antecedent basis for this limitation in the claim. There would be sufficient antecedent basis for ‘the segmentation’.

Note: the first instance of an element should be in the form “a [unique descriptive terminology]”, and successive references to that element should be in the form “the [unique descriptive terminology]”, where [unique descriptive terminology] is the same throughout the claims. This is necessary because similarly phrased elements can be patentably distinct.

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 7 & 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 7 & 20, in line 3 (both claims), recite the limitation “wherein the machine learning model is configured to, upon execution: determine a type of fuel dispensed based on the audio data.” The initially filed specification does not disclose a means for “determine a type of fuel dispensed based on the audio data”. One possible means could be ‘use a speaker to determine whether a tank for a particular type of fuel is less full by measuring the depth of the fuel using the time of an echo’. A second possibility would be ‘use a speaker to excite oscillations in the fuel; then wavelength and amplitude data could provide density and therefore type of fuel’.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

[Flow diagrams from MPEP 2106(III) & MPEP 2106.04(II)(A), respectively.]

Claims 1-20 are rejected under 35 U.S.C. 101 because:

Claim 1: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “a method of determining a quantity of fuel dispensed at a fueling station based on audio”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”
Yes; the claim recites:

“and executing a machine learning model on the audio data, wherein the machine learning model is configured to, upon execution:”
“segment the audio data into segments, wherein each segment is associated with a respective one of the stages of the refueling operation;”
“determine that a first segment of the segments includes audio associated with a fuel flow stage of the refueling operation in which fuel is dispensed;”
“determine a length of time of the first segment;”
“and determine a quantity of fuel dispensed based on the length of time of the first segment.”

Explanation:

Rule: See MPEP 2106.04(a)(2)(III)(C): “In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.” See MPEP 2106.04(a)(2)(I): “The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations.” & “It is important to note that a mathematical concept need not be expressed in mathematical symbols, because ‘[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula.’”

Analysis: These limitations (at least under the broadest reasonable interpretation) are directed towards operations (either mathematical or mental) applied to data. As such, these limitations are directed towards the abstract idea groupings of either ‘mental processes’ or ‘mathematical concepts’.
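Note: to illustrate that these limitations amount to operations applied to data, the claimed duration and quantity determinations can be reduced to a short arithmetic sketch. The segment boundaries, stage labels, and fixed flow rate below are hypothetical illustrative values, not figures from the application.

```python
# Illustrative sketch only: the claimed determinations reduce to arithmetic
# on data. All segment boundaries, stage labels, and the flow rate are
# hypothetical values, not data from the application.
segments = [
    {"stage": "truck_approach", "start_s": 0.0,   "end_s": 30.0},
    {"stage": "fuel_flow",      "start_s": 30.0,  "end_s": 630.0},
    {"stage": "truck_departs",  "start_s": 630.0, "end_s": 660.0},
]
ASSUMED_FLOW_RATE_L_PER_S = 2.5  # hypothetical constant flow rate

# "determine a length of time of the first segment"
fuel_flow = next(s for s in segments if s["stage"] == "fuel_flow")
duration_s = fuel_flow["end_s"] - fuel_flow["start_s"]

# "determine a quantity of fuel dispensed based on the length of time"
quantity_l = duration_s * ASSUMED_FLOW_RATE_L_PER_S

print(duration_s, quantity_l)
```

Under this reading, the quantity determination is a subtraction followed by a single multiplication on gathered data.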
Conclusion: Therefore, the claim recites an abstract idea, law of nature, or natural phenomenon.

Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the additional element(s)/limitation(s) of:

“generating audio data from one or more microphones, wherein the audio data is associated with stages of a refueling operation at a fueling station;”

are extra-solution activity.

Explanation:

Rule: See MPEP 2106.05(g): “The term ‘extra-solution activity’ can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity. An example of pre-solution activity is a step of gathering data for use in a claimed process,” & “(3) Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).”

Analysis: The judicial exception(s) directed towards “executing a machine learning model on the audio data” necessarily require the “generating audio data from one or more microphones”. Therefore, this limitation is not significantly more than the judicial exception(s).

Conclusion: Therefore, it is not the case that the claim recites “additional elements that integrate the judicial exception into a practical application”.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the additional element(s)/limitation(s) as listed in Revised Step 2A Prong Two are well-known, conventional subject matter to one of ordinary skill in the art.

Explanation:

Rule: See MPEP 2106.05(d)(I): “2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity.”
& “(c) A citation to a publication that demonstrates the well-understood, routine, conventional nature of the additional element(s);”

Analysis:

1) US 20020157469 A1, “Device For Checking The Quantity Of Gasolines, Diesel Fuels, Fuels Or Liquids In General During Introduction In A Tank” (Cilia); see Fig. 1-4: “ultrasound emission source”.
2) US 6629457 B1, “Device For Measuring A Fill Level Of A Liquid In A Container” (Keller); see Fig. 1-8: “ultrasonic sensor”.
3) US 12327571 B2, “Systems And Methods For Diagnosing Equipment” (Ramaiah); see Fig. 4-420: “Extract Features From The Audio File” & Fig. 4-430: “Input the Extracted Features Into a Machine Learning Model”.

Conclusion: Therefore, the claim does not recite additional elements that amount to significantly more than the judicial exception.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 2: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 1. The claim additionally recites:

“wherein the machine learning model is further configured to upon execution:”
“determine that a second segment of the segments includes audio associated with a fuel truck approaching the fueling station;”
“determine that a third segment of the segments includes audio associated with a grounding of the fuel truck;”
“and determine that a fourth segment of the segments includes audio associated with the fuel truck leaving the fueling station.”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model.
These limitations are, at least under the broadest reasonable interpretation, limitations which could be done by the human mind and are “merely using a computer as a tool to perform the concept.”

Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim does not recite additional elements beyond those listed in Step 2A Prong One.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements beyond those listed in Step 2A Prong One.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 3: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 2 and thereby from claim 1. The claim additionally recites:

“wherein the machine learning model is further configured to, upon execution,”
“determine that the first segment of the segments includes audio associated with the fuel flow stage based upon (1) the determination that the second segment of the segments includes audio associated with a fuel truck approaching the fueling station, and (2) the determination that third segment of the segments includes audio associated with a grounding of the fuel truck.”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model. These limitations are, at least under the broadest reasonable interpretation, limitations which could be done by the human mind and are “merely using a computer as a tool to perform the concept.”
Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim does not recite additional elements beyond those listed in Step 2A Prong One.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements beyond those listed in Step 2A Prong One.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 4: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 1. The claim additionally recites:

“wherein the machine learning model is further configured to, upon execution:”
“compare the quantity of fuel dispensed to a logged amount of fuel dispensed;”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model. These limitations are, at least under the broadest reasonable interpretation, limitations which could be done by the human mind and are “merely using a computer as a tool to perform the concept.”

Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim additionally recites:

“and output an alert if a difference between the quantity of fuel dispensed and a logged amount of fuel dispensed exceeds a threshold.”

Explanation: This limitation is necessary outputting of data from the judicial exception(s), and is necessarily implied by the judicial exception(s).
This limitation is insignificant extra-solution (post-solution) activity (see MPEP 2106.05(g): “Insignificant application”).

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements beyond those addressed in Step 2A Prong Two.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 5: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 1. The claim additionally recites:

“and training the machine learning model based on the training audio data and the annotations to determine audio events associated with the fuel flow stage of the refueling operation in which fuel is dispensed.”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model. Training a machine learning model is within the abstract idea grouping of mathematical concepts; supplying data and creating a mathematical model which fits that data is math.
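Note: the characterization of training as mathematics can be illustrated with a minimal least-squares fit, i.e., creating a mathematical model which fits supplied data. The data points below are invented for illustration and do not come from the application or the cited references.

```python
# Minimal "training" sketch: fit y = slope * x + intercept by least squares.
# The (x, y) pairs are hypothetical annotated training examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates: pure arithmetic on the supplied data.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(slope, intercept)
```

Every step is a mathematical calculation; the computer serves only to carry out the arithmetic.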
Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim additionally recites:

“receiving training audio data, wherein the training audio data is associated with the stages of refueling operations at a fueling station;”
“receiving annotations on the training audio data, wherein the annotations include labeling of audio events in the audio data corresponding to the stages of refueling operations;”

Explanation: These limitations are necessary data gathering for the judicial exception(s), and are necessarily implied by the judicial exception(s). This limitation is insignificant extra-solution (pre-solution) activity.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements beyond those addressed in Step 2A Prong Two.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 6: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 1. The claim additionally recites:

“and executing the machine learning model on the image data, wherein the machine learning model is configured to, upon execution:”
“identify a fuel truck in the image data,”
“and verify that the first segment of the segments includes audio associated with a fuel flow stage of the refueling operation based on the fuel truck identified in the image data.”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model.
These are mental processes (see MPEP 2106.04(a)(2)(III)(C): “2. Performing a mental process in a computer environment”).

Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim additionally recites:

“generating image data from one or more cameras, wherein the image data is associated with the refueling operation at the fueling station;”

Explanation: These limitations are necessary data gathering for the judicial exception(s), and are necessarily implied by the judicial exception(s). This limitation is insignificant extra-solution (pre-solution) activity.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements beyond those addressed in Step 2A Prong Two.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claim 7: Step Analysis

Step 1: “Is the claim to a process, machine, manufacture, or composition of matter?”

Yes; the claim is directed towards “The method of claim 1”, which is a process and within one of the four statutory categories.

Revised Step 2A Prong One: “Does the claim recite an abstract idea, law of nature, or natural phenomenon?”

Yes; the claim recites the judicial exception(s) as inherited from claim 1. The claim additionally recites:

“wherein the machine learning model is configured to, upon execution:”
“determine a type of fuel dispensed based on the audio data.”

Explanation: These limitations are directed towards computations or reasoning done by a machine learning model. These are mental processes (see MPEP 2106.04(a)(2)(III)(C): “2. Performing a mental process in a computer environment”).
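Note: the first means hypothesized in the §112 rejections above (measuring the depth of the fuel using the time of an echo) reduces to a single time-of-flight formula, underscoring its mathematical character. The speed of sound and echo timing below are hypothetical illustrative values; nothing here is disclosed in the application.

```python
# Hypothetical sketch of echo time-of-flight depth measurement: a
# speaker/microphone at the top of the tank measures the round-trip time
# of an echo off the fuel surface; headspace depth d = v * t / 2.
SPEED_OF_SOUND_AIR_MPS = 343.0  # approximate speed of sound in air, m/s

def headspace_depth_m(echo_round_trip_s: float) -> float:
    """Distance from sensor to fuel surface (a shorter echo = a fuller tank)."""
    return SPEED_OF_SOUND_AIR_MPS * echo_round_trip_s / 2.0

print(headspace_depth_m(0.004))
```

Comparing such depths across tanks of known fuel types would indicate which tank is being drawn down, consistent with the first hypothesized means.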
Revised Step 2A Prong Two: “Does the claim recite additional elements that integrate the judicial exception into a practical application?”

No; the claim does not recite additional elements/limitations beyond those addressed in Step 2A Prong One.

Step 2B: “Does the claim recite additional elements that amount to significantly more than the judicial exception?”

No; the claim does not recite additional elements/limitations beyond those addressed in Step 2A Prong One.

Conclusion: Therefore, “Claim is not eligible subject matter under 35 USC 101”.

Claims 8-20 are rejected for similar reasons as claims 1-7.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 7-18, & 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over
US 20100023162 A1, “Method, System And Components For Operating A Fuel Distribution System With Unmanned Self-Service Gasoline Stations” (Gresak), in view of US 20220299354 A1, “Device, System and Method for Determining the Fill Level of a Container” (Cunnah).

Regarding claim 1, Gresak teaches a method of determining a quantity of fuel dispensed at a fueling station (Fig. 1A-28: “gasoline or service station”, fueling station/(“service station”)).

Gresak does not as explicitly teach: based on audio, the method comprising: generating audio data from one or more microphones, wherein the audio data is associated with stages of a refueling operation at a fueling station; and executing a machine learning model on the audio data, wherein the machine learning model is configured to, upon execution: segment the audio data into segments, wherein each segment is associated with a respective one of the stages of the refueling operation; determine that a first segment of the segments includes audio associated with a fuel flow stage of the refueling operation in which fuel is dispensed; determine a length of time of the first segment; and determine a quantity of fuel dispensed based on the length of time of the first segment.

Cunnah teaches based on audio, the method comprising: generating audio data from one or more microphones (Fig.
1-107: “microphone”), wherein the audio data is associated with stages of a refueling operation at a fueling station (para 0001: “The container may be a container for containing contents such as a liquefied gas or beer, but the invention is not limited to this.”); and executing a machine learning model on the audio data, wherein the machine learning model is configured to, upon execution: segment the audio data into segments, wherein each segment is associated with a respective one of the stages of the refueling operation (para 0017: “The method may comprise a step of normalising the usage profile and the capture time of the acoustic responses of each container with respect to time to form the test data. The computing step may utilise a machine learning algorithm.”, usage profile would include segmenting (or stages) over time); determine that a first segment of the segments includes audio associated with a fuel flow stage of the refueling operation in which fuel is dispensed (para 0015: “The method may further comprise obtaining a usage end time and determining the likely fill level for each acoustic response using the usage start time and the usage end time. The usage end time may be inferred from location information, or may be inferred by determining a final fill level before refilling the container”); determine a length of time of the first segment (para 0015: “using the usage start time and the usage end time”); and determine a quantity of fuel dispensed based on the length of time of the first segment (para 0015: “determining the likely fill level for each acoustic response”, based on length of time (“multiple acoustic responses”)).

It would have been obvious to one of ordinary skill in the relevant art before the effective filing date of the claimed invention to have modified the method taught by Gresak with the teachings of Cunnah.
One would have added to the “Method, System And Components For Operating A Fuel Distribution System With Unmanned Self-Service Gasoline Stations” of Gresak the “Device, System and Method for Determining the Fill Level of a Container” of Cunnah. The motivation would have been that the unmanned self-service gasoline station would need a means of measuring the amount of fuel in tanks without requiring employees; otherwise it would not be unmanned (see Cunnah para 0140: “Another benefit to this approach is the ability to obtain a result from a given tank or cylinder from a remote location. This enables a business to remotely monitor the fill levels of a large number of tanks or cylinders simultaneously and without any further assistance.”).

Regarding claim 2, Gresak in view of Cunnah teaches the method of claim 1. Cunnah further teaches wherein the machine learning model is further configured to, upon execution: determine that a second segment of the segments includes audio associated with a fuel truck approaching the fueling station (para 0020: “The usage start time may be determined from either a time of delivery of the container to a customer, or a time of fill or refill of the container (logistics information).
Similarly, the usage end time may be determined from either a time of return of the container to the supplier or a time of refill of the container (logistics information).”; the machine learning system recognizes a time when refilling begins as associated with arrival of the truck); determine that a third segment of the segments includes audio associated with a grounding of the fuel truck (para 0082: “For example, the communications device 120 may continue to operate as a receiver during the dormant mode, and be responsive to a trigger signal to wake up the transducer device to take measurements”; the system would recognize sounds associated with a refueling truck and with grounding as indicative that the system should wake up); and determine that a fourth segment of the segments includes audio associated with the fuel truck leaving the fueling station (para 0086: “the communications device 120 may continue to operate as a receiver during the dormant mode, and be responsive to a trigger signal to wake up the transducer device to take measurements. Alternatively, if measurements are to be taken periodically, a timer (not shown, but potentially implemented as part of the communications device 120 or computing device 110) may function during the dormant mode, and may trigger the transducer device to wake up when a measurement is due to be taken.”; the system would associate sounds of the truck leaving as a signal to switch to a dormant state). Regarding claim 3, Gresak in view of Cunnah teaches the method of claim 2. Cunnah further teaches wherein the machine learning model is further configured to, upon execution, determine that the first segment of the segments includes audio associated with the fuel flow stage (para 0130: “The same principles of non-use detection apply for acoustic responses i and j.
It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”) based upon (1) the determination that the second segment of the segments includes audio associated with a fuel truck approaching the fueling station (para 0020: “The usage start time may be determined from either a time of delivery of the container to a customer, or a time of fill or refill of the container (logistics information). Similarly, the usage end time may be determined from either a time of return of the container to the supplier or a time of refill of the container (logistics information).”; the machine learning system recognizes a time when refill begins as associated with arrival of the truck), and (2) the determination that the third segment of the segments includes audio associated with a grounding of the fuel truck (para 0082: “For example, the communications device 120 may continue to operate as a receiver during the dormant mode, and be responsive to a trigger signal to wake up the transducer device to take measurements”; the system would recognize sounds associated with a refueling truck and with grounding as indicative that the system should wake up). Regarding claim 4, Gresak in view of Cunnah teaches the method of claim 1. Gresak further teaches and output an alert (para 0139-0142: “The commander server gathers information from the measuring systems 31, 76, 74 and other gasoline station equipment 72 and provides different information to the users, including: … fuel volume, normalized fuel volume (at 15° C.)
and time stamped last measurement, filling detection, alarming (IFSF compliant: Overfill status, Underfill status, Supply warning, High-High level alarm, High level alarm, Low-Low level alarm, Low level alarm, High water alarm, high sediment alarm), and measurement history.”) if a difference between the quantity of fuel dispensed and a logged amount of fuel dispensed exceeds a threshold (para 0061: “The tank level gauge can support tank probes, leakage sensors and level switches. Tank probe responses can be constantly analyzed by an algorithm, detect dirt, possible measurement noise and other anomalies,”; if the fuel level differs from a threshold, then there must be leakage). Cunnah further teaches wherein the machine learning model is further configured to, upon execution: compare the quantity of fuel dispensed to a logged amount of fuel dispensed (para 0094: “The registration of the gas cylinder as full may take place at the time of refilling, at the time of dispatch to a customer, or at any time in between. The database system may be a dedicated database used to implement the present technique, or an existing database operating by the supplier for other purposes. The information may typically be obtained through either existing logistics records and/or from the tracking capability on the device itself.”; the system logs (“logistics”) how full gas cylinders are). Regarding claim 5, Gresak in view of Cunnah teaches the method of claim 1. Cunnah further teaches further comprising: receiving training audio data, wherein the training audio data is associated with the stages of refueling operations (para 0045: “a method for the training of a computer software model for a stimulus driven fill level measurement system for a container, such as a tank or cylinder, is provided. In summary, a container is agitated and the corresponding frequency response is obtained.
The analysis of the resulting response is combined with responses corresponding to different, but unknown, levels within a given container.”; the system is trained on data about containers at different stages (times including “fill levels”)) at a fueling station; receiving annotations on the training audio data, wherein the annotations include labeling of audio events in the audio data corresponding to the stages of refueling operations (para 0130: “The same principles of non-use detection apply for acoustic responses i and j. It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”; the times corresponding to “just before” and to “end time” segment (or identify) stages of refueling); and training the machine learning model based on the training audio data and the annotations to determine audio events associated with the fuel flow stage of the refueling operation in which fuel is dispensed (para 0118: “The above explanation is intended to indicate the general principles of using machine learning to convert training data (in this case a set of acoustic profiles and associated estimated fill levels) into a model which is substantially optimised to be able to convert further acoustic profiles into fill levels.”; training a machine learning model requires providing data which is labeled with annotations for determining event types (including “estimated fill levels”)).
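The train-then-infer flow mapped above (labeled audio segments in; a stage label and a duration-based fuel quantity out) can be sketched as follows. This is an illustrative sketch only, not drawn from Gresak or Cunnah: the feature values, stage labels, nearest-centroid classifier, and assumed flow rate are all hypothetical stand-ins for whatever acoustic features and model the claimed method would actually use.

```python
# Hypothetical sketch of training on annotated audio segments and
# estimating dispensed fuel from the fuel-flow segment's duration.
# All names, values, and the flow rate are illustrative assumptions.

from statistics import mean

# Hypothetical training data: (feature, stage label) pairs, where the
# feature stands in for an acoustic measurement (e.g. RMS energy).
TRAINING = [
    (0.9, "fuel_flow"), (0.8, "fuel_flow"),
    (0.2, "truck_approach"), (0.1, "truck_leaving"),
]

def train(samples):
    """Toy nearest-centroid 'model': mean feature value per stage label."""
    grouped = {}
    for feat, label in samples:
        grouped.setdefault(label, []).append(feat)
    return {label: mean(feats) for label, feats in grouped.items()}

def classify(model, feature):
    """Assign a segment to the stage whose centroid is closest."""
    return min(model, key=lambda label: abs(model[label] - feature))

def quantity_dispensed(flow_seconds, rate_l_per_s=0.5):
    """Estimate litres from the fuel-flow segment length (assumed rate)."""
    return flow_seconds * rate_l_per_s

model = train(TRAINING)
stage = classify(model, 0.85)       # loud segment -> fuel-flow stage
litres = quantity_dispensed(120.0)  # 120 s of detected fuel flow
print(stage, litres)                # -> fuel_flow 60.0
```

The sketch mirrors the claim structure: annotations supply the labels, training reduces them to a model, and the quantity determination depends only on the length of time of the segment classified as fuel flow.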
Regarding claim 7, Gresak in view of Cunnah teaches the method of claim 1. Cunnah further teaches wherein the machine learning model is configured to, upon execution: determine a type of fuel dispensed based on the audio data (para 0010: “According to one aspect, there is provided a method of configuring a container model, the container model defining a relationship, for a particular type of container and/or contents, between characteristics of an acoustic response and an associated fill level,”; fuel type (“contents”)). Regarding claim 8, Gresak teaches a system for training a machine learning model to determine a quantity of fuel dispensed at a fueling station (Fig. 1A-28: “gasoline or service station”; fueling station (“service station”)). Gresak does not explicitly teach based on audio, the system comprising: a microphone installed at a fueling station, wherein the microphone is configured to generate audio data associated with stages of a refueling operation occurring at the fueling station; a processor; and memory having instructions that, when executed by the processor, cause the processor to: receive annotations associated with the audio data from an annotator, wherein the annotations include a segmentation of the audio data with labels, wherein each label is associated with a respective stage of the refueling operation; provide, as training data, the segmentations of the audio data and the labels to a machine learning model; train the machine learning model to identify the stages of the refueling operation based on the training data; and output a trained machine learning model configured to identify the stages of the refueling operation based on audio. Cunnah teaches based on audio, the system comprising: a microphone (Fig.
1-107: “microphone”) installed at a fueling station, wherein the microphone is configured to generate audio data associated with stages of a refueling operation occurring at the fueling station (para 0001: “The container may be a container for containing contents such as a liquefied gas or beer, but the invention is not limited to this.”); a processor (Fig. 1-110: “Computing Device”); and memory having instructions (Fig. 1-110: “Computing Device”) that, when executed by the processor, cause the processor to: receive annotations associated with the audio data from an annotator, wherein the annotations include a segmentation of the audio data with labels, wherein each label is associated with a respective stage of the refueling operation (para 0130: “The same principles of non-use detection apply for acoustic responses i and j. It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”; the times corresponding to “just before” and to “end time” segment (or identify) stages of refueling); provide, as training data, the segmentations of the audio data and the labels to a machine learning model (para 0017: “The method may comprise a step of normalising the usage profile and the capture time of the acoustic responses of each container with respect to time to form the test data. The computing step may utilise a machine learning algorithm.”; the usage profile would include segmenting (or stages) over time); train the machine learning model to identify the stages of the refueling operation based on the training data (para 0130: “The same principles of non-use detection apply for acoustic responses i and j.
It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”; the times corresponding to “just before” and to “end time” segment (or identify) stages of refueling); and output a trained machine learning model configured to identify the stages of the refueling operation based on audio (Fig. 1-107: “microphone”; para 0001: “The container may be a container for containing contents such as a liquefied gas or beer, but the invention is not limited to this.”). It would have been obvious to one of ordinary skill in the relevant art before the effective filing date of the claimed invention to have modified the system taught by Gresak with the teachings of Cunnah. One would have added to the “Method, System And Components For Operating A Fuel Distribution System With Unmanned Self-Service Gasoline Stations” of Gresak the “Device, system and method for determining the fill level of a container” of Cunnah. The motivation would have been that the unmanned self-service gasoline station would need a means of measuring the amount of fuel in tanks without requiring employees; otherwise it would not be unmanned (see Cunnah para 0140: “Another benefit to this approach is the ability to obtain a result from a given tank or cylinder from a remote location. This enables a business to remotely monitor the fill levels of a large number of tanks or cylinders simultaneously and without any further assistance.”). Regarding claim 9, Gresak in view of Cunnah teaches the system of claim 8. Cunnah further teaches wherein one of the stages of the refueling operation includes fuel flow (para 0130: “The same principles of non-use detection apply for acoustic responses i and j.
It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”), and the training includes training the machine learning model to identify fuel flow based on the training data (para 0017: “The method may comprise a step of normalising the usage profile and the capture time of the acoustic responses of each container with respect to time to form the test data. The computing step may utilise a machine learning algorithm.”). Regarding claim 10, Gresak in view of Cunnah teaches the system of claim 9. Cunnah further teaches wherein the memory, when executed by the processor, causes the processor to: train the machine learning model to determine the fuel flow to be during a first time period (para 0015: “using the usage start time and the usage end time”), and determine a quantity of fuel flow based on a length of the first time period (para 0015: “determining the likely fill level for each acoustic response”). Regarding claim 11, Gresak in view of Cunnah teaches the system of claim 8. Cunnah further teaches wherein the trained machine learning model is configured to, upon execution: segment the audio data into segments, wherein each segment is associated with a respective one of the stages of the refueling operation (para 0017: “The method may comprise a step of normalising the usage profile and the capture time of the acoustic responses of each container with respect to time to form the test data.
The computing step may utilise a machine learning algorithm.”; the usage profile would include segmenting (or stages) over time); determine that a first segment of the segments includes audio associated with a fuel flow stage of the refueling operation in which fuel is dispensed (para 0015: “The method may further comprise obtaining a usage end time and determining the likely fill level for each acoustic response using the usage start time and the usage end time. The usage end time may be inferred from location information, or may be inferred by determining a final fill level before refilling the container”); determine a length of time of the first segment (para 0015: “using the usage start time and the usage end time”); and determine a quantity of fuel dispensed based on the length of time of the first segment (para 0015: “determining the likely fill level for each acoustic response”, based on the length of time (“multiple acoustic responses”)). Regarding claim 12, Gresak in view of Cunnah teaches the system of claim 11. Cunnah further teaches wherein the trained machine learning model is configured to, upon execution, determine that the first segment of the segments includes audio associated with the fuel flow stage (para 0130: “The same principles of non-use detection apply for acoustic responses i and j. It can also be seen that a usage end time can be determined as the first acoustic response which matches an acoustic response obtained at time F just before refilling the container.”) based upon (1) the determination that a second segment of the segments includes audio associated with a fuel truck approaching the fueling station (para 0020: “The usage start time may be determined from either a time of delivery of the container to a customer, or a time of fill or refill of the container (logistics information).
Similarly, the usage end time may be determined from either a time of return of the container to the supplier or a time of refill of the container (logistics information).”; the machine learning system recognizes a time when refill begins as associated with arrival of the truck), and (2) the determination that the third segment of the segments includes audio associated with a grounding of the fuel truck (para 0082: “For example, the communications device 120 may continue to operate as a receiver during the dormant mode, and be responsive to a trigger signal to wake up the transducer device to take measurements”; the system would recognize sounds associated with a refueling truck and with grounding as indicative that the system should wake up). Regarding claim 13, Gresak in view of Cunnah teaches the system of claim 8. Gresak further teaches and output an alert (para 0139-0142: “The commander server gathers information from the measuring systems 31, 76, 74 and other gasoline station equipment 72 and provides different information to the users, including: … fuel volume, normalized fuel volume (at 15° C.) and time stamped last measurement, filling detection, alarming (IFSF compliant: Overfill status, Underfill status, Supply warning, High-High level alarm, High level alarm, Low-Low level alarm, Low level alarm, High water alarm, high sediment alarm), and measurement history.”) if a difference between the quantity of fuel dispensed and a logged amount of fuel dispensed exceeds a threshold (para 0061: “The tank level gauge can support tank probes, leakage sensors and level switches. Tank probe responses can be constantly analyzed by an algorithm, detect dirt, possible measurement noise and other anomalies,”; if the fuel level differs from a threshold, then there must be leakage). Cunnah further