DETAILED ACTION
Claims 1-20 are pending. Claims 11-20 have been added. Claims 1, 2, and 4 have been amended. Claims 1-20 are rejected.
The instant application claims priority to provisional application No. 63/504,291, filed on 05/25/2023.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Claim 13 is an apparatus claim reciting a server. Paragraph [0079] of the specification provides support for a computing device which may include one or more processors. Accordingly, apparatus claim 13 is interpreted as being supported by computer hardware.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Colley et al., Patent Application Publication No. 2021/0090694 (hereinafter Colley) in view of Adam et al., Patent Application Publication No. 2018/0004868 (hereinafter Adam) and Tasinga et al., Patent Application Publication No. 2023/0144662 (hereinafter Tasinga).
Regarding claim 1, Colley teaches:
A method for provisioning computing resources for analyzing a bioinformatic pipeline, the method comprising (Colley Paragraph [0469], schematic illustrating an exemplary bioinformatics pipeline process):
receiving, by a server over a network connection (Colley Paragraph [0416], includes a server that maintains and manipulates an industry specific data repository. The data operation is received by the collaboration server and used to access and/or manipulate the database data, thereby generating a data response), a set of bioinformatic sample data (Colley Paragraph [1052], The MSI algorithm may be initiated after the raw sequencing data is processed through the bioinformatics pipeline);
providing, via a web application over the network connection (Colley Paragraph [0416], includes a server that maintains and manipulates an industry specific data repository. The data operation is received by the collaboration server and used to access and/or manipulate the database data, thereby generating a data response), a pipeline user interface displaying a plurality of analysis pipelines (Colley Paragraph [1067], generate the bioinformatics pipeline 386, which is stored in a molecular data lake database 389; the molecular lake data 389 and the de-identified single tenant files 380 are accessible to other authorized partners via other interfaces 384);
receiving, via the network connection (Colley Paragraph [0416], includes a server that maintains and manipulates an industry specific data repository. The data operation is received by the collaboration server and used to access and/or manipulate the database data, thereby generating a data response), an indication of a particular analysis pipeline to perform a bioinformatic analysis on the bioinformatic sample data (Colley Paragraph [1054], client, such as an entity that generates a bioinformatics pipeline, can register new samples 1157 and upload variant call text files 1159 for processing to a cloud service 1161);
providing output data via an output user interface (Colley Paragraph [1067], generate the bioinformatics pipeline 386, which is stored in a molecular data lake database 389; the molecular lake data 389 and the de-identified single tenant files 380 are accessible to other authorized partners via other interfaces 384); and
Colley does not expressly disclose:
performing analysis of the particular analysis pipeline using the set of high performance compute resources;
deprovisioning the set of high performance compute resources.
However, Adam teaches:
performing analysis of the particular analysis pipeline using the set of high performance compute resources (Adam Paragraph [0012], include a management cloud resident in the cloud computing environment to provision the high performance computing cloud cluster, Paragraph [0037], The data may also be combined with historical data or other inputs for further analysis);
deprovisioning the set of high performance compute resources (Adam Paragraph [0101], upon termination of a customer subscription, an HPC cloud cluster may be deprovisioned, and in connection with such deprovisioning, all customer data, including all provisioned storage and each head and compute node, may be securely deleted).
The claimed invention and Adam are from the analogous art of systems for provisioning computing resources. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Colley and Adam. Adam teaches the benefit of running simulations to predict production rates, which can be used to determine appropriate improvements (Paragraph 3).
Colley in view of Adam does not expressly disclose:
determining a set of high performance compute resources to be utilized to process the particular analysis pipeline, wherein the determination is based on a performance metric;
wherein the performance metric comprises at least one of a time elapsed for a prior analysis, a CPU processing power used for a prior analysis, a GPU processing power used for a prior analysis and a memory used for a prior analysis;
based on the determining, provisioning the set of high performance compute resources, wherein the provisioning comprises initializing at least one software application for the set of high performance compute resources, wherein the software application is based on the indication of the particular analysis pipeline;
However, Tasinga teaches:
determining a set of high performance compute resources to be utilized to process the particular analysis pipeline (Tasinga Paragraph [0492], one or more PPUs 3200 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications, Paragraph [0506], pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline), wherein the determination is based on a performance metric (Tasinga Paragraph [0631], The method of any one of clauses 16-18, wherein dynamically partitioning the one or more neural networks is also performed based, at least in part, on one or more graphics processing unit (GPU) power metrics);
wherein the performance metric comprises at least one of a time elapsed for a prior analysis, a CPU processing power used for a prior analysis, a GPU processing power used for a prior analysis and a memory used for a prior analysis (Tasinga Paragraph [0631], The method of any one of clauses 16-18, wherein dynamically partitioning the one or more neural networks is also performed based, at least in part, on one or more graphics processing unit (GPU) power metrics);
based on the determining, provisioning the set of high performance compute resources, wherein the provisioning comprises initializing at least one software application for the set of high performance compute resources (Tasinga Paragraph [0492], one or more PPUs 3200 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications, Paragraph [0506], pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline), wherein the software application is based on the indication of the particular analysis pipeline (Tasinga Paragraph [0492], one or more PPUs 3200 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications, Paragraph [0506], pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline);
The claimed invention and Tasinga are from the analogous art of pipeline systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Colley in view of Adam with those of Tasinga. Tasinga teaches ensuring that memory pages are moved to the physical memory of the processor that accesses those pages most frequently, thereby improving efficiency for memory ranges (Paragraph 173).
Regarding claim 2, Colley in view of Adam and Tasinga further teaches:
The method of claim 1, wherein the providing a pipeline user interface further comprises providing a customizable analysis pipeline, wherein the customizable analysis pipeline comprises a plurality of reusable code modules (Tasinga Paragraph [0564], pipeline manager 3712 may be used, in addition to an application orchestration system 3728, to manage interaction between applications or containers of deployment pipeline(s) 3710 and services 3620 and/or hardware 3622); and
wherein the customizable analysis pipeline comprises at least one custom analysis module (Colley Paragraph [1284], Confirmed detection of an untargeted variant may be made after analysis in the bioinformatics pipeline, Paragraph [0049], field of the disclosure is complex medical testing order processing and management methods and systems and more specifically adaptive order processing systems for generating customized).
Regarding claim 3, Colley in view of Adam and Tasinga further teaches:
The method of claim 2, wherein the at least one custom analysis module comprises a set of user-defined instructions (Adam Paragraph [0084], Thereafter, authenticated users may login based on their subscription and submit jobs (e.g., via a script) and upload data to the HPC cloud cluster to run a reservoir simulation).
Regarding claim 4, Colley in view of Adam and Tasinga further teaches:
The method of claim 1, wherein the determining the set of high performance compute resources is further based on recording an amount of GPU processing power used to perform a prior analysis (Tasinga Paragraph [0492], one or more PPUs 3200 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications, Paragraph [0506], pipeline manager 3302 configures at least one of DPCs 3306 to implement a neural network model and/or a computing pipeline).
Regarding claim 5, Colley in view of Adam and Tasinga further teaches:
The method of claim 4, the method further comprising: evaluating a set of processing steps required to complete performing analysis of the particular analysis pipeline (Colley Paragraph [1280], The results of sequencing (herein, the “raw sequencing data”) may be passed through a bioinformatics pipeline where the raw sequencing data is analyzed);
determining that a first processing step of the set of processing steps may be performed in parallel with a second processing step of the set of processing steps (Colley Paragraph [1183], Although a flowchart may describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently); and
processing the first processing step and second processing step in parallel (Colley Paragraph [1183], Although a flowchart may describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently).
Regarding claim 6, Colley in view of Adam and Tasinga further teaches:
The method of claim 5, wherein the providing output data further comprises generating visualization data, wherein the visualization data is provided in a markup language file (Colley Paragraph [1498], Once pre-processed, the document may be submitted for optical character recognition (OCR) on the document to convert the text into a machine-readable format, such as text, html, JSON, or XML using other document processing tools).
Regarding claim 7, Colley in view of Adam and Tasinga further teaches:
The method of claim 6, the method further comprising providing status information to a user (Colley Paragraph [0049], managing those resources to complete order items and ultimately generate order reports and to enable visualization of real time and historical order status).
Regarding claim 8, Colley in view of Adam and Tasinga further teaches:
The method of claim 6, wherein the providing a pipeline user interface displaying a plurality of analysis pipelines further comprises (Colley Paragraph [0049], managing those resources to complete order items and ultimately generate order reports and to enable visualization of real time and historical order status):
evaluating the set of bioinformatic sample data (Colley Paragraph [1054], client, such as an entity that generates a bioinformatics pipeline, can register new samples 1157 and upload variant call text files 1159 for processing to a cloud service 1161);
determining a plurality of pipelines relevant to the set of bioinformatic sample data (Colley Paragraph [1054], client, such as an entity that generates a bioinformatics pipeline, can register new samples 1157 and upload variant call text files 1159 for processing to a cloud service 1161); and
including the plurality of pipelines relevant to the set of bioinformatic sample data in the plurality of analysis pipelines (Colley Paragraph [1054], One exemplary workflow 1153 with respect to the bioinformatics pipeline is shown in FIG. 11b. Referring also to FIG. 11c, a client, such as an entity that generates a bioinformatics pipeline, can register new samples 1157 and upload variant call text files 1159 for processing to a cloud service 1161).
Regarding claim 9, Colley in view of Adam and Tasinga further teaches:
The method of claim 1, the method further comprising: recording a set of performance indicators associated with the performing analysis of the particular analysis pipeline (Colley Paragraph [1486], indicator also may account for the fact that a test suite may comprise dozens, if not hundreds, of different validation checks and that some may return acceptable results while others may indicate errors, missing information, or incomplete information);
storing the set of performance indicators (Colley Paragraph [1501], Exemplary predefined models may be a JSON file, HTML, XML, or other structured data. Predefined models may store a list of features that are derived from the document based on MLA processing); and
utilizing information derived from the set of performance indicators to determine a second set of high performance compute resources to be utilized in performing a second analysis pipeline (Adam Paragraph [0015], includes second program code that upon execution by at least one processing unit in the high performance computing cloud cluster receives the submitted simulation job).
Regarding claim 10, Colley in view of Adam and Tasinga further teaches:
The method of claim 9, the method further comprising: anonymizing metadata contained in the bioinformatic sample data (Colley Paragraph [3081], It will be appreciated that this data may be anonymized);
compressing the bioinformatic sample data for storage (Adam Paragraph [0100], compression may also be used to reduce file transfer sizes); and
transmitting the compressed bioinformatic sample data to cold storage (Colley Paragraph [1006], data format or other transport data used to transmit the received data to the gateway. If the data is to be ingested, gateway 312 gleans metadata from the received data at block 506 which is stored in the data lake catalog 226).
Regarding claim 11, Colley in view of Adam and Tasinga further teaches:
The method of claim 1, wherein the provisioning further comprises: evaluating a plurality of modules contained within the indicated analysis pipeline (Colley Paragraph [1735], orchestration module or software such as orchestrator 5304 may guide the processing of each of the blocks and elements contained in the pipeline 5300 to ensure efficient processing);
determining, using topological sorting, based on input requirements for each of the plurality of modules, an ordering of the modules for the analysis pipeline (Colley Paragraph [1183], many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged); and
identifying whether any one of the plurality of modules may be run in parallel with other ones of the plurality of modules (Colley Paragraph [1183], many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged).
Regarding claim 12, Colley in view of Adam and Tasinga further teaches:
The method of claim 11, further comprising: determining a complete set of software dependencies required for the indicated analysis pipeline (Colley Paragraph [2995], Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system); and
mounting a machine image to a provisioned high compute resource (Tasinga Paragraph [0429], display device 2611 can include a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications).
Claims 13-20 are rejected in the same manner as claims 1-12, as they are directed to a different embodiment (apparatus rather than method) of the same invention.
Response to Arguments
Applicant's arguments filed 07/28/2025 have been fully considered; some are persuasive and others are not, as explained in detail below.
Applicant’s arguments, see pages 8-14, filed 07/28/2025, with respect to claims 1-10 regarding the rejection under 35 U.S.C. 101 have been fully considered and are persuasive. The rejection of claims 1-10 under 35 U.S.C. 101 has been withdrawn.
On pages 14-16, Applicant argues that Colley does not disclose the providing and receiving limitations. The Examiner respectfully disagrees. Colley teaches a bioinformatics pipeline that is stored in a molecular data lake database; the molecular lake data and the de-identified tenant files are accessible to other authorized partners via other interfaces 384 (Paragraph 1067). Colley therefore discloses the providing limitation, since the molecular lake data contains the bioinformatics pipeline, which is accessible via other interfaces. Colley further teaches “a user interface (UI) to implement a patient record analysis system responsible for managing the flow of information between systems of the instant architecture and/or stage of the processing pipeline” (Paragraph 1678). Colley also teaches an entity that generates a bioinformatics pipeline and can register new samples and upload variant call text files for processing to a cloud service (Paragraph 1054), which constitutes receiving an indication to perform a bioinformatic analysis on bioinformatic sample data. Therefore, Colley discloses the providing and receiving limitations.
Regarding the other amended limitations, the newly cited Tasinga reference discloses these limitations, as set forth in the rejection above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Potter et al., Patent Application Publication No. 2011/0055720.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN D EYERS whose telephone number is (408)918-7562. The examiner can normally be reached Monday-Thursday 9:00am-7:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng can be reached at (571)270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUSTIN D EYERS/ Examiner, Art Unit 2164
/AMY NG/ Supervisory Patent Examiner, Art Unit 2164