DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

The following is a quotation of 35 U.S.C. 112(f):

Element in Claim for a Combination. —An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “computing devices of a service provider network configured to” in claim 11.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 11, and 19 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 2A, Prong One, the limitation “determine, based on respective times when the results are received from the quantum hardware provider, ground truth wait times for the requests” recites a mental process, since “determining” is a function that can reasonably be performed in the human mind, with the aid of pen and paper, through observation, evaluation, judgment, and opinion. Under Prong Two, the additional elements “receive, from customers of the service provider network, requests to execute quantum objects using a quantum processing unit (QPU) of a quantum hardware provider that is made accessible via the service provider network; logically map respective ones of the requests into positions in a queue for execution using the QPU; generate, using the machine learning model, predicted wait times for respective ones of the requests based, at least in part, on the positions in the queue; provide the predicted wait times to the respective customers; submit the requests for execution using the QPU; receive, from the quantum hardware provider, results of the execution of the requests; determine, based on respective times when the results are received from the quantum hardware provider, ground truth wait times for the requests; generate a labeled dataset based, at least in part, on: information pertaining to the quantum objects of the customers; the predicted wait times; and the ground truth wait times; and provide the labeled dataset as an input to re-train the machine learning model using a supervised learning technique
” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component, or merely a generic computer or generic computer components to perform the judicial exception. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f).

Under Step 2B, the additional elements “receive, from customers of the service provider network, requests to execute quantum objects using a quantum processing unit (QPU) of a quantum hardware provider that is made accessible via the service provider network; logically map respective ones of the requests into positions in a queue for execution using the QPU; generate, using the machine learning model, predicted wait times for respective ones of the requests based, at least in part, on the positions in the queue; provide the predicted wait times to the respective customers; submit the requests for execution using the QPU” generally could be performed as a mental process, although the “quantum processing unit (QPU),” “machine learning model,” and “queue” could be generic computer components that the specification describes as actual computer hardware. The limitations “ground truth wait times for the requests; generate a labeled dataset based, at least in part, on: information pertaining to the quantum objects of the customers; the predicted wait times; and the ground truth wait times; and provide the labeled dataset as an input to re-train the machine learning model using a supervised learning technique” are mere instructions to apply the mental process under MPEP 2106.05(f); they amount to merely generally linking the use of the judicial exception to a particular technological environment or field of use and merely apply the judicial exception. Therefore, they do not amount to significantly more and cannot provide an inventive concept.
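For illustration only (this sketch is not part of the prosecution record; every name, number, and the one-parameter model below are hypothetical), the sequence of limitations recited above — map requests to queue positions, predict a wait time per position with a model, collect ground-truth waits when results return, and re-train on the labeled pairs — could be sketched as:

```python
from dataclasses import dataclass

@dataclass
class WaitTimePredictor:
    """Toy stand-in for the claimed machine learning model: it predicts a
    request's wait as (queue position) x (learned mean per-job time)."""
    mean_job_time: float = 10.0  # seconds; arbitrary starting estimate

    def predict(self, position: int) -> float:
        # "generate ... predicted wait times ... based on the positions in the queue"
        return position * self.mean_job_time

    def retrain(self, labeled: list[tuple[int, float]]) -> None:
        # "provide the labeled dataset as an input to re-train the model":
        # here, a one-parameter supervised fit of the mean per-job time
        # from (position, ground-truth wait) pairs.
        self.mean_job_time = sum(wait / pos for pos, wait in labeled) / len(labeled)

model = WaitTimePredictor()
queue = ["job-a", "job-b", "job-c"]  # requests logically mapped to positions 1..3
predicted = {job: model.predict(i + 1) for i, job in enumerate(queue)}

# Ground-truth waits, derived from when results actually came back (made-up numbers).
ground_truth = {"job-a": 12.0, "job-b": 26.0, "job-c": 33.0}
labeled = [(i + 1, ground_truth[job]) for i, job in enumerate(queue)]
model.retrain(labeled)  # mean_job_time becomes (12/1 + 26/2 + 33/3) / 3 = 12.0
```

The point of the sketch is only that each recited step (queueing, prediction, labeling, re-training) is a distinct, mechanical operation; it says nothing about the eligibility analysis itself.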
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed above with respect to integration of the abstract idea into a practical application. See MPEP 2106.05(d). Thus, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kannan (US 20210173588 A1) in view of Santaus (US 20230409406 A1), further in view of Zhu (CN 109670784 A), and further in view of Roisman (US 20230222177 A1).

As to claim 1, Kannan teaches one or more computing devices of a service provider network configured to implement: a quantum computing service (FIG. 3A, the storage system 306 is coupled to the cloud services provider 302 via a data communications link 304. The data communications link 304 may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, para[0129], ln 1-10 / The cloud services provider 302 depicted in FIG. 3A may be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider 302 through the sharing of computing resources via the data communications link 304
to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on, para[0130], ln 1-11 / The storage system 306 depicted in FIG. 3B may include a vast amount of storage resources 308 … storage resources 308 may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, para[0141], ln 15-20); logically map respective ones of the requests into positions in a queue for execution using the QPU (Each queue may comprise a plurality of entries 1030a-1030d for storing one or more corresponding requests. For example, a device unit for a corresponding SSD may include queues to store at least read requests, write requests, trim requests, erase requests and so forth, para[0271], ln 6-14 / an I/O scheduler schedules read and write operations for one or more storage devices. In various embodiments, the I/O scheduler may maintain a separate queue (either physically or logically) for each storage device. In addition, the I/O scheduler may include a separate queue for each operation type supported by a corresponding storage device, para[0257], ln 1-10 / storage resources 308 … storage resources 308 may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, para[0141], ln 15-20); Santaus teaches and a machine learning model, wherein the one or more computing devices that implement the quantum computing service are configured to: receive, from customers of the service provider network, requests to execute quantum objects using a quantum processing unit (QPU) of a quantum hardware provider that is made accessible via the service provider network (Quantum circuits, such as the circuit 100, can be used for various purposes.
Quantum circuits may be used, by way of example only, to perform optimizations, machine learning, para[0019], ln 1-4 / When the accelerator request 318 is generated by the job 302, the orchestration engine 308 submits the accelerator portion 306 to, for example, the QPU 324, para[0028] / In this case, the job 412 is one of the jobs for which an accelerator job has been requested. In this example, the jobs 418, 420, and 412 are waiting for results and the accelerator portions of these jobs have been submitted to the accelerators 424 by the orchestration engine 402. When an accelerator job is completed and results are returned, the job for which results have been returned is placed in the ready to resume queue 426, para[0035], ln 3-8 to para[0036], ln 1-3 / In the method 500, a job is received 502 into a computing system that includes accelerators, which may be remote (e.g., accessed over a network). Resources may be allocated to the job if the resources are available, para[0039]); generate, using the machine learning model, predicted wait times for respective ones of the requests based, at least in part, on the positions in the queue (the job 412 is one of the jobs for which an accelerator job has been requested, para[0035], ln 5-8 / When executing the hybrid application 208 and when a quantum job is required, the quantum job may be performed in a … When the quantum job is completed, results may be returned to the application 216 or to the computer 206, para[0024], ln 3-14 / The orchestration engine 308 may include or use a machine learning model, which has been trained to estimate or infer an execution time associated with executing a job at the QPU 324 (and/or other accelerators), to determine how long the job 302 will need to wait until the results 326 of the accelerator portion 306 are returned by the QPU 324, para[0029], ln 1-9 / Assuming results have been returned from the accelerators 424 for the job 412, the job 412 is placed in the ready to resume queue 426.
The orchestration engine 402 will allocate resources back to the job 412, para[0036], ln 3-8); submit the requests for execution using the QPU; receive, from the quantum hardware provider, results of the execution of the requests (an accelerator such as a GPU (Graphics Processing Unit) may be used, para[0002], ln 5-7 / In this case, the job 412 is one of the jobs for which an accelerator job has been requested. In this example, the jobs 418, 420, and 412 are waiting for results and the accelerator portions of these jobs have been submitted to the accelerators 424 by the orchestration engine 402. When an accelerator job is completed and results are returned, the job for which results have been returned is placed in the ready to resume queue 426. Assuming results have been returned from the accelerators 424 for the job 412, the job 412 is placed in the ready to resume queue 426. The orchestration engine 402 will allocate resources back to the job 412 when resources are available. This removes the job 412 from the ready to resume queue 426 and back to the allocated resources 404, para[0035], ln 5-12 to para[0036]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kannan with Santaus to incorporate the above feature, because this provides accelerator/quantum computing functions as a service and manages/allocates resources for jobs that require accelerator/quantum functions.
Zhu teaches provide the predicted wait times to the respective customers (a method that provides waiting times by using historical data to estimate the time needed to handle different service types, then estimating, based on these estimates, the amount of time a client needs to wait; the estimated result is reported to the client so that the client can arrange its own schedule, Sec: notification of the specification, ln 1-10 / the waiting time of a customer is obtained according to the estimated transaction service time of each client in the first set, the estimated service time of each client in the second set, the service time the customer has already transacted, and the number of windows transacting business. Specifically, according to the queuing sequence, all clients ahead of the client whose waiting time is to be calculated are divided into a first client set and a second client set, Sec: In one or more embodiment of the present specification, ln 10-20); determine, based on respective times when the results are received from the quantum hardware provider, ground truth wait times for the requests; generate a labeled dataset based, at least in part, on: information pertaining to the quantum objects of the customers; the predicted wait times; and the ground truth wait times (the device can be realized by the processor in the computer executing the corresponding program.
It can be realized using the C++ language under the Windows operating system on a PC, under a linux system, or in the programming language of an intelligent terminal such as android or iOS, and the processing logic can be based on a quantum computer implementation and the like, Sec: The specification provided by the embodiment A inform, ln 1-16 / based on the estimated values and the queuing order of the different service types, the waiting time of the client is calculated. According to the queuing sequence, all current clients are divided into a first client set and a second client set, where the first client set comprises clients whose services have not yet been transacted and the second client set comprises clients whose services are being transacted. Then, the waiting time of the customer is obtained according to the estimated transaction service time of each client in the first set, the estimated service time of each client in the second set, the service time already transacted, and the number of windows transacting business. Specifically, the estimated transaction time of each first-set client's service to be transacted is marked as T1; for each second-set client, the time already spent transacting is subtracted from the estimated transaction time to obtain the remaining transaction time of each second-set client's service,
which is marked as T2; then T1 and T2 are added and the sum is marked as T, sec: In one or more embodiment of the present specification, ln 1-20).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kannan and Santaus with Zhu to incorporate the above feature, because this improves the customer experience and increases customer stickiness.

Roisman teaches provide the labeled dataset as an input to re-train the machine learning model using a supervised learning technique (In the example, at 270, the method includes training (or re-training) a machine learning model using the labeled dataset. Training can include applying at least a portion of the labeled dataset to the machine learning model as a training dataset. The method can optionally include validating and/or testing the machine learning model using the labeled dataset. In one example, the method can include dividing the labeled dataset into subsets, and each subset can be used for one aspect of developing the machine learning model. For example, the labeled dataset can be partitioned into training, validation, and testing datasets. In some cases, instead of partitioning the labeled dataset, operations 210-260 can be used to generate multiple labeled datasets that can be used for various aspects of developing the machine learning model, para[0040] / forming a labeled dataset, wherein the forming comprises selecting a data entry from the data entries, executing the rules set on the data entry to obtain a result, and using the result as a label for the data entry; and forming a training dataset from the labeled dataset; and applying the training dataset to a machine learning model during training of the machine learning model, para[0077], ln 14-22).
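As an illustrative aside (not part of the record; the data and split fractions below are hypothetical), the partition of a labeled dataset into training, validation, and testing subsets described in the Roisman passage above is a standard supervised-learning step and might look like:

```python
import random

def partition_labeled_dataset(labeled, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle a labeled dataset and split it into training, validation,
    and testing subsets (whatever remains after train and val is test)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = labeled[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Hypothetical entries: (features of the quantum object, predicted wait, ground-truth wait).
dataset = [({"circuit_depth": d}, pred, truth) for d, pred, truth in [
    (4, 10.0, 12.0), (8, 20.0, 26.0), (2, 5.0, 4.5), (6, 15.0, 14.0),
    (3, 8.0, 9.0), (9, 22.0, 25.0), (5, 12.0, 11.0), (7, 18.0, 19.5),
    (1, 3.0, 2.8), (10, 25.0, 27.0)]]
train, val, test = partition_labeled_dataset(dataset)  # 7 / 1 / 2 entries
```

Each subset then serves one aspect of model development: the training subset updates the model, the validation subset tunes it, and the testing subset measures it on data the model never saw.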
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kannan, Santaus, and Zhu with Roisman to incorporate the above feature, because this allows sufficiently large datasets to be generated quickly and made available for training, validating, or testing a machine learning model.

As to claim 11, it is rejected for the same reasons as claim 1 above. In addition, Santaus teaches provide a recommendation to the customer of one or more possible QPUs to be used to execute the request, wherein the recommendation comprises the predicted wait times (Quantum circuits, such as the circuit 100, can be used for various purposes. Quantum circuits may be used, by way of example only, to perform optimizations, machine learning, simulation, and the like. Performing large numbers of shots, however, requires time during which the job that called or required the QPU may be waiting for the output of the QPU, para[0019] / The orchestration engine 308 may include or use a machine learning model, which has been trained to estimate or infer an execution time associated with executing a job at the QPU 324 (and/or other accelerators), to determine how long the job 302 will need to wait until the results 326 of the accelerator portion 306 are returned by the QPU 324, para[0029], ln 1-10).

As to claim 19, it is rejected for the same reasons as claim 1 above.

Allowable Subject Matter

Claims 2-10, 12-18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion

US 20220156631 A1 teaches a machine-learning model that predicts probability of success for deployment of an operator in an environment with a namespace of a platform as a service (PaaS) cloud; and a deployment component that receives a first operator and a first namespace and employs the machine-learning model to predict success of deployment of the first operator in a first environment.

US 20220051118 A1 teaches that the system can use the predicted characteristics to determine locations for subsequent measurement, which can be labeled according to predicted characteristics inferred by the system 100, and used to re-train one or more machine learning models implemented by the analytics engine 115 for characteristic predictions.

US 20220156631 A1 teaches: use feedback from an SME to validate that {r.sub.i} is correct; update the training data (D.sub.i) with m.sub.i and its label if necessary; retrain a classifier f.sub.i using D.sub.i.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LECHI TRUONG, whose telephone number is (571) 272-3767. The examiner can normally be reached 10-8 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Young, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LECHI TRUONG/
Primary Examiner, Art Unit 2194

/KEVIN L YOUNG/
Supervisory Patent Examiner, Art Unit 2194