Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
2. No restrictions are warranted at the initial time of filing.
Priority
3. Applicant claims domestic priority under 35 U.S.C. § 119(e) to the provisional application filed on 11/28/2023.
Oath/Declaration
4. Applicant’s Oath was filed on 08/05/2024.
Drawings
5. Applicant’s drawings filed on 08/05/2024 have been inspected and are in compliance with MPEP 608.02.
Specification
6. Applicant’s specification filed on 08/05/2024 has been inspected and is in compliance with MPEP 608.01.
Claim Objections
7. No claim objections are warranted at the initial time of filing.
Remarks
8. The Examiner requests that Applicant review the relevant prior art cited at the conclusion of this Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
9. Claims 21-25, 27-35, 37, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,922,222 (hereinafter "Chawla") in view of U.S. Patent Application Publication No. 2022/0138004 (hereinafter "Nandakumar").
As per claim 21, Chawla discloses:
A system for automating code (Col. 134 Lines 34-46 “In certain embodiments, by changing one component, the control plane system 524 can automatically configure an entire instance of the data intake and query system 102 for the tenant. For example, if the tenant modifies an indexing node, then the control plane system 524 may require that all components of a data intake and query system 102 be instantiated for that tenant. In this way, the control plane system 524 can sandbox different tenants or data. In some embodiments, the control plane system 524 can automatically configure some of the components of the data intake and query system 102 for single tenancy.”),
the system comprising at least one processor (Fig. 1, Col. 6 Line 62 – Col. 7 Line 15 “ A host device 104 can correspond to a distinct computing device or system that includes or has access to data that can be ingested, indexed, and/or searched by the system 102. Accordingly, in some cases, a client device 106 may also be a host device 104 (e.g., it can include data that is ingested by the system 102 and it can submit queries to the system 102). The host devices 104 can include, but are not limited to, servers, sensors, routers, personal computers, mobile devices, internet of things (IOT) devices, or hosting devices, such as computing devices in a shared computing resource environment on which multiple isolated execution environment (e.g., virtual machines, containers, etc.) can be instantiated, or other computing devices in an IT environment (e.g., device that includes computer hardware, e.g., processors, non-transitory, computer-readable media, etc.). In certain cases, a host device 104 can include a hosted, virtualized, or containerized device, such as an isolated execution environment, that shares computing resources (e.g., processor, memory, etc.) of a particular machine (e.g., a hosting device or hosting machine) with other isolated execution environments.”) configured to:
receive a code and associated metadata information for configuring an application to be run as an image on a software platform (Fig. 14, Col. 136 Line 43-48 “At block 1402, the control plane system 524 receives configurations of a component. In some embodiments, the component can correspond to a component of the data intake and query system 102, such as an indexing node 704, ingest manager 716, partition manager 708, search node 806, search head 804, bucket manager 714, etc.” Col. 136 Lines 52-58 “ The configurations can correspond to versions (e.g., software versions, releases, etc.) and/or parameters of the component. In some cases, the configurations can be based on the type of the component, a tenant associated with the component, and/or a type of the shared computing resource environment, or provider of the shared computing resource environment in which the component is to be instantiated.” Col. 137 Lines 13-17 “As described herein, in certain embodiments, the configurations can correspond to specific parameters of the component. As mentioned, different components of the data intake and query system 102 can have different parameters associated with them. For example, indexing nodes 704 can have parameters associated with its various indexing policies (e.g., bucket creation policy, bucket merge policy, bucket roll-over policy, bucket management policy), bucket sizes, or authorizations, etc. Similarly, a search head 804 can have parameters associated with its various searching policies (e.g., search node mapping policy, etc.), the number of search nodes to use for a query, number of concurrent queries to support, etc.”),
the metadata information including a set of functionalities associated with the application (Col. 138 Lines 20-28 “In certain embodiments, the configurations can correspond to an unreleased version of the component. For example, as developers modify components, they may save the modifications as an alpha or beta version that is not accessible to the public, or they may save the version as a test version. In some such cases, the test version of the component can be received as a configuration. Any one or any combination of the above-described configurations can be received by the control plane system 524.”);
analyze the code and the associated metadata information to assess infrastructure resources required for running the application on the software platform; and generate an image construction file from the code and associated metadata based on the assessment (Col. 138 Line 53- Col. 139 Line 11 “At block 1404, the control plane system 524 generates an image of a modified component. In certain embodiments, the control plane system 524 generates the image itself or requests another computing device or system to generate the image. The modified component can correspond to a modified version of the component based on the configurations received by the control plane system 524. For example, if the component is an indexer version 1.0, and the configurations include version 2.0, the modified component can be an indexer version 2.0. In such an example, the image of the indexer version 2.0 can include the computer executable instructions, system tools, system libraries, and settings, to enable an isolated execution environment 1314 to be configured as an indexer version 2.0. As another example, if the component is an indexer version 2.0 with a parameter indicating that buckets are to be converted from hot buckets to warm buckets every minute and the configurations include changing the parameter so that that buckets are converted from hot buckets to warm buckets every 45 seconds, the modified component can be an indexer version 2.0 with a parameter indicating that buckets are to be converted from hot buckets to warm buckets every 45 seconds. In such an example, the image of the can include the code, tools, libraries, and settings to enable an isolated execution environment 1314 to be configured as the indexer with the desired version and configurations.”),
wherein the image construction file is used to construct the image in a manner incorporating a correspondence between the infrastructure resources required for running the application and the set of functionalities, thereby enabling a running instance of the application to accomplish the set of functionalities (Col. 139 Lines 12-23 “As described herein, an image can include computer executable instructions, system tools, system libraries and settings, and other data so that when the image is instantiated or executed, an application or program is provided within an isolated execution environment 1314. A non-limiting example of an image is a Docker container image. Accordingly, the image of the modified component can include computer executable instructions, system tools, system libraries, and component parameters so that when the image of the modified component is instantiated or executed, an instance of the modified component is generated within an isolated execution environment 1314.”)
Chawla does not disclose
automating code deployment
Nandakumar discloses:
automating code deployment (para 0013 “Certain aspects of the present disclosure provide for a data science workflow framework that incorporates tools and methods for the seamless and user-friendly construction of AI and/or ML pipelines and is configured to: remove or reduce integration barriers; reduce required expertise for the development and deployment of AI products framework; simplify/streamline integration of raw and heterogeneous data; enable high-level and intuitive abstractions that specify pipeline construction requirements; enable an intuitive and/or automated means, such as a graphical user interface (GUI), for a user to specify ML model standards and customize/configure components, data pipelines, data processing, data transport mechanisms, streaming analytics, AI model libraries, pipeline execution methods, orchestration, adapters, and computing resources.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chawla, which configures components of the data intake and query system, to include automating code deployment, as taught by Nandakumar.
The motivation would have been to reduce required expertise for the development and deployment of AI products framework (Nandakumar para 0013).
As per claim 22, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to employ artificial intelligence for at least one of analyzing the code and the associated metadata information or generating the image construction file (Nandakumar para 0076 “ FIG. 1 depicts a functional block diagram 100 of an artificial intelligence operating system (“AiOS”) 102. In accordance with various embodiments, AiOS 102 comprises an integrated environment comprising one or more software component 104 each pre-loaded with an AI OS intelligent functionality. Software component 104 contains a built and tested code block 106 that provides a mechanism 108 to encapsulate one or more operations of a ML product lifecycle. AiOS 102 may comprise a library 110 of reusable or customizable components. In accordance with various embodiments, one or more component 104 may be linked in a sequential connection 114 and/or parallel connection 116 comprising a topology of pipeline 118 for building an analytic ML model. Software component 104 may be configured to accept one or more streaming data sources 120. An analytic model created from pipeline 118 consumes data source 120, typically in the form of a data stream. An analytic data provide may utilize one or more data sources including, for example, APACHE SPARK, HADOOP, AMAZON REDSHIFT, AZURE SQL Data Warehouse, MICROSOFT SQL Server, and/or TERADATA. The analytic data provider or source may utilize one or more example infrastructure systems including: on-premises hardware, such as in-office computing and/or proprietary datacenter computing; or off-premises hardware, such as cloud infrastructure including AMAZON WEB SERVICES, MICROSOFT AZURE, IBM BLUEMIX, and/or GOOGLE Cloud Platform. 
In accordance with various embodiments, AiOS 102 is configured to enable user-friendly data science experimentation, exploration, analytic model execution, prototyping, pipeline 118 construction, to establish a complete end-to-end, transparent, AI pipeline building process for the development, production, and deployment of reproducible, scalable, and interoperable ML models and AI applications with governance.” Though Chawla discloses analyzing code, Nandakumar discloses employing artificial intelligence for analyzing the code. The motivation would have been to reduce required expertise for the development and deployment of AI products framework (Nandakumar para 0013).).
As per claim 23, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to employ artificial intelligence to infer at least one rule setting, or parameter for generating the image construction file (Nandakumar para 0096 “In accordance with certain aspects of the present disclosure, block definition process 400 may proceed by executing one or more operations for publishing the block in a block library of the AI OS (Step 420). Block definition process 400 may proceed by executing one or more operations for estimating one or more computing resources for the block based on the block's function, dataflow, and execution requirements (Step 422). In various embodiments, the one or more computing resources may be estimated by using one or more non-limiting methods, such as heuristic, meta-heuristic, rules engine or algorithm based on historical data, data flow simulations, source code static or dynamic tracing, resource management tool, and combinations thereof.” Though Chawla discloses rules, Nandakumar discloses employing artificial intelligence to infer at least one rule setting. The motivation would have been to reduce required expertise for the development and deployment of AI products framework (Nandakumar para 0013).).
As per claim 24, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to scan the code and infer dependencies necessary for running the application on the software platform and include references for the dependencies in the image construction file (Chawla Col. 99 Lines 6-31 “As mentioned, the metadata catalog 521 can include annotations or information about the datasets, fields, users, or applications of the system 102 and can be revised as additional information is learned. Non-limiting examples of annotations that can be added to the dataset configuration records 904, other configurations, annotation tables or entries, or other locations of the metadata catalog 521 or system 102, include but are not limited to, the identification and use of fields in a dataset, number of fields in a dataset, related fields, related datasets, number (and identity) of dependent datasets, number (and identity) of datasets depended on, capabilities of a dataset or related dataset source or provider, the identification of datasets with similar configurations or fields, units or preferred units of data obtained from a dataset, alarm thresholds, data categories (e.g., restrictions), users or groups, applications, popular field, datasets, and applications (in total or by user or group), etc. In certain cases, the annotations can be added as the system 102 monitors system use (e.g., processing queries, monitoring query execution, user interaction, etc.) or as the system 102 detects changes to the metadata catalog 521 (e.g., one manual/automated change can lead to another automated change), etc. Additional information regarding example annotations are described in the Incorporated Applications, each of which is incorporated herein by reference for all purposes.”).
As per claim 25, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the associated metadata information includes blacklisted elements, and wherein at least one processor is further configured to identify the blacklisted elements and generate the image construction file such that the running instance of the application denies requests associated with the blacklisted elements (Chawla Col. 50 Lines 4-17 “While shown in FIG. 6A as distinct, these ingestion buffers 606 and 610 may be implemented as a common ingestion buffer. However, use of distinct ingestion buffers may be beneficial, for example, where a geographic region in which data is received differs from a region in which the data is desired. For example, use of distinct ingestion buffers may beneficially allow the intake ingestion buffer 606 to operate in a first geographic region associated with a first set of data privacy restrictions, while the output ingestion buffer 610 operates in a second geographic region associated with a second set of data privacy restrictions. In this manner, the intake system 110 can be configured to comply with all relevant data privacy restrictions, ensuring privacy of data processed at the data intake and query system 102.” Col. 89 Lines 57-67 “In some embodiments, the dataset association records 902 can also be used to limit or restrict access to datasets and/or rules. For example, if a user uses one dataset association record 902 they may be unable to access or use datasets and/or rules from another dataset association record 902. In some such embodiments, if a query identifies a dataset association record 902 for use but references datasets or rules of another dataset association record 902, the data intake and query system 102 can indicate an error.” Col. 99 Lines 6-31 “As mentioned, the metadata catalog 521 can include annotations or information about the datasets, fields, users, or applications of the system 102 and can be revised as additional information is learned.
Non-limiting examples of annotations that can be added to the dataset configuration records 904, other configurations, annotation tables or entries, or other locations of the metadata catalog 521 or system 102, include but are not limited to, the identification and use of fields in a dataset, number of fields in a dataset, related fields, related datasets, number (and identity) of dependent datasets, number (and identity) of datasets depended on, capabilities of a dataset or related dataset source or provider, the identification of datasets with similar configurations or fields, units or preferred units of data obtained from a dataset, alarm thresholds, data categories (e.g., restrictions), users or groups, applications, popular field, datasets, and applications (in total or by user or group), etc. In certain cases, the annotations can be added as the system 102 monitors system use (e.g., processing queries, monitoring query execution, user interaction, etc.) or as the system 102 detects changes to the metadata catalog 521 (e.g., one manual/automated change can lead to another automated change), etc. Additional information regarding example annotations are described in the Incorporated Applications, each of which is incorporated herein by reference for all purposes.”).
As per claim 27, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to incorporate into the image a mapping between the set of functionalities and a list of infrastructure resources provided by the software platform and associated with the set of functionalities, thereby incorporating the correspondence between the infrastructure resources required for running the application and the set of functionalities (Chawla Col. 68 Line 11-43 “ In some cases, the information relating to the indexing nodes 704 includes information relating to one or more indexing node assignments. As described herein, an indexing node assignment can include an indication of a mapping between a particular indexing node 704 and an identifier (for example, a tenant identifier, a partition manager identifier, etc.) or between a particular node and a data record received from the intake system 110. In this way, an indexing node assignment can be utilized to determine to which indexing node 704 a partition manager 708 should send data to process. For example, an indexing node assignment can indicate that a particular partition manager 708 should send its data to one or more particular indexing nodes 704. As another example, an indexing node assignment can indicate that some or all data associated with a particular identifier (for example, data associated with a particular tenant identifier) should be forwarded to one or more a particular indexing node 704 for processing. In some cases, a computing device associated with the resource catalog 720 can determine an indexing node assignment and can store the indexing node assignment in the resource catalog 720. In some cases, an indexing node assignment, is not stored in the resource catalog 720. 
For example, each time the resource monitor 718 receives a request for an indexing node assignment from a partition manager 708, the resource monitor 718 can use information stored in the resource catalog 720 to determine the indexing node assignment, but the indexing node assignment may not be stored in the resource catalog 720. In this way, the indexing node assignments can be altered, for example if necessary based on information relating to the indexing nodes 704.”).
As per claim 28, Chawla in view of Nandakumar discloses:
The system of claim 27, wherein the at least one processor is configured to generate the mapping (Chawla Fig. 1, Col. 6 Line 62 – Col. 7 Line 15 and Fig. 14, Col. 136 Lines 43-48).
As per claim 29, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the required infrastructure resources include software libraries providing common functionalities for multiple applications (Chawla Col. 131 Lines 17-31 “For example, the image of the modified component can include the software version for the modified component, any particular parameters selected or modified by the tenant, software add-ons, system configurations, libraries, etc., to enable a host device 1304 to generate an isolated execution environment 1314 configured as the modified component. The parameters and/or software add-ons may be preconfigured or preinstalled with the pre-modified component or may have been installed/configured after the pre-modified component was instantiated (e.g., an image generated from the pre-modified component when it is first instantiated may be different from an image generated from the pre-modified component at a later time after one or more add-ons are installed, newer software versions are installed or parameters are changed).”).
As per claim 30, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the required infrastructure resources include at least one of central processing unit (CPU) load, memory storage, network bandwidth, database space, libraries, frameworks, or peripheral applications (Chawla Col. 67 Lines 16-32 “In some cases, the resource catalog 720 includes one or more metrics associated with one or more of the indexing nodes 704 in the indexing system 112. For example, the metrics can include, but are not limited to, one or more performance metrics such as CPU-related performance metrics, memory-related performance metrics, availability performance metrics, or the like. For example, the resource catalog 720 can include information relating to a utilization rate of an indexing node 704, such as an indication of which indexing nodes 704, if any, are working at maximum capacity or at a utilization rate that satisfies utilization threshold, such that the indexing node 704 should not be used to process additional data for a time. As another example, the resource catalog 720 can include information relating to an availability or responsiveness of an indexing node 704, an amount of processing resources in use by an indexing node 704, or an amount of memory used by an indexing node 704.”).
As per claim 31, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the required infrastructure resources include at least one of frameworks, communication managers, storage managers, or memory (Chawla Col. 67 Lines 16-32 “In some cases, the resource catalog 720 includes one or more metrics associated with one or more of the indexing nodes 704 in the indexing system 112. For example, the metrics can include, but are not limited to, one or more performance metrics such as CPU-related performance metrics, memory-related performance metrics, availability performance metrics, or the like. For example, the resource catalog 720 can include information relating to a utilization rate of an indexing node 704, such as an indication of which indexing nodes 704, if any, are working at maximum capacity or at a utilization rate that satisfies utilization threshold, such that the indexing node 704 should not be used to process additional data for a time. As another example, the resource catalog 720 can include information relating to an availability or responsiveness of an indexing node 704, an amount of processing resources in use by an indexing node 704, or an amount of memory used by an indexing node 704.”).
As per claim 32, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to predict future patterns of resource consumption (Nandakumar para 0018 “In certain exemplary embodiments, an execution engine may perform a variety of functions including, but not limited to, tracking information in a data structure, deriving and resolving dependencies, storing-receiving metadata and/or future data or results from asynchronous operations or call backs, performing fault-tolerant, processing exceptions and execution errors, and combinations thereof and/or the like. In certain exemplary embodiments, an execution engine control logic may be derived from one or more annotated decorators of one or more blocks enabling asynchronous, parallel, and portable execution of heterogenous pipeline workloads independent of resource allocations or constraints.” para 0105 “Certain objects and advantages of the present disclosure is an AI OS (e.g., AiOS 102 of FIG. 1) that supports the following: [0106] 1. Block Intelligence: This is the building block of the pipelines that are run. They need to be estimated clearly in terms of the resources that they require (which depends on the inputs provided), as well as the size and volume of outputs that they produce (both batch and streaming). In various embodiments, many test cases are generated under varying conditions and testing the block against these. The data thus gathered is used to build ML models which can predict their performance based on the inputs provided.” Though Chawla discloses resources, Nandakumar discloses predicting future patterns of resource consumption. The motivation would have been to reduce required expertise for the development and deployment of AI products framework (Nandakumar para 0013).).
As per claim 33, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein analyzing the code and the associated metadata information includes identifying patterns and extracting meaningful information from the code and metadata information without human intervention (Chawla Col. 88 Line 18-28 “Similarly, if a user enters a query, the metadata catalog 521, can edit the dataset configuration record 904. With continued reference to the example above, if another user enters the same query or the same user executes the query at a later time (with or without prompting by the system 102), the metadata catalog 521 can edit the corresponding dataset configuration record 904. For example, the metadata catalog 521 can increment a count for the number of times the query has been used, add information about the users that have used the query, include a job ID, query results, and/or query results identifier, each time the query is executed, etc..” Col. 142 Line 58 -Col. 143 Line 3 “Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.”).
As per claim 34, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is further configured to scan the code in accordance with a set of rules included in the associated metadata information (Chawla Col. 99 Lines 6-31 “As mentioned, the metadata catalog 521 can include annotations or information about the datasets, fields, users, or applications of the system 102 and can be revised as additional information is learned. Non-limiting examples of annotations that can be added to the dataset configuration records 904, other configurations, annotation tables or entries, or other locations of the metadata catalog 521 or system 102, include but are not limited to, the identification and use of fields in a dataset, number of fields in a dataset, related fields, related datasets, number (and identity) of dependent datasets, number (and identity) of datasets depended on, capabilities of a dataset or related dataset source or provider, the identification of datasets with similar configurations or fields, units or preferred units of data obtained from a dataset, alarm thresholds, data categories (e.g., restrictions), users or groups, applications, popular field, datasets, and applications (in total or by user or group), etc. In certain cases, the annotations can be added as the system 102 monitors system use (e.g., processing queries, monitoring query execution, user interaction, etc.) or as the system 102 detects changes to the metadata catalog 521 (e.g., one manual/automated change can lead to another automated change), etc. Additional information regarding example annotations are described in the Incorporated Applications, each of which is incorporated herein by reference for all purposes.”).
As per claim 35, Chawla in view of Nandakumar discloses:
The system of claim 34, wherein the at least one processor is further configured to take a remedial action upon determining non-compliance of the code with the set of rules (Chawla Col. 50 Lines 4-17 “While shown in FIG. 6A as distinct, these ingestion buffers 606 and 610 may be implemented as a common ingestion buffer. However, use of distinct ingestion buffers may be beneficial, for example, where a geographic region in which data is received differs from a region in which the data is desired. For example, use of distinct ingestion buffers may beneficially allow the intake ingestion buffer 606 to operate in a first geographic region associated with a first set of data privacy restrictions, while the output ingestion buffer 610 operates in a second geographic region associated with a second set of data privacy restrictions. In this manner, the intake system 110 can be configured to comply with all relevant data privacy restrictions, ensuring privacy of data processed at the data intake and query system 102.” Col. 89 Lines 57-67 “In some embodiments, the dataset association records 902 can also be used to limit or restrict access to datasets and/or rules. For example, if a user uses one dataset association record 902 they may be unable to access or use datasets and/or rules from another dataset association record 902. In some such embodiments, if a query identifies a dataset association record 902 for use but references datasets or rules of another dataset association record 902, the data intake and query system 102 can indicate an error.” Col. 99 Lines 6-31 “As mentioned, the metadata catalog 521 can include annotations or information about the datasets, fields, users, or applications of the system 102 and can be revised as additional information is learned.
Non-limiting examples of annotations that can be added to the dataset configuration records 904, other configurations, annotation tables or entries, or other locations of the metadata catalog 521 or system 102, include but are not limited to, the identification and use of fields in a dataset, number of fields in a dataset, related fields, related datasets, number (and identity) of dependent datasets, number (and identity) of datasets depended on, capabilities of a dataset or related dataset source or provider, the identification of datasets with similar configurations or fields, units or preferred units of data obtained from a dataset, alarm thresholds, data categories (e.g., restrictions), users or groups, applications, popular field, datasets, and applications (in total or by user or group), etc. In certain cases, the annotations can be added as the system 102 monitors system use (e.g., processing queries, monitoring query execution, user interaction, etc.) or as the system 102 detects changes to the metadata catalog 521 (e.g., one manual/automated change can lead to another automated change), etc. Additional information regarding example annotations are described in the Incorporated Applications, each of which is incorporated herein by reference for all purposes.”).
As per claim 37, the non-transitory computer-readable medium of claim 37 is implemented by the system of claim 21. The claim is analyzed with respect to claim 21.
As per claim 38, the method of claim 38 is performed by the system of claim 21. The claim is analyzed with respect to claim 21.
10. Claims 26 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Chawla in view of Nandakumar, and further in view of U.S. Publication No. 20220253347 hereinafter Jones.
As per claim 26, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the at least one processor is configured (Chawla Fig. 1, Col. 6 Line 62 – Col. 7 Line 15)
Chawla in view of Nandakumar does not disclose:
apply a scale to zero capability to the image
Jones discloses:
apply a scale to zero capability to the image (para 0018 “The practice of running containers on demand, and in particular “scaling to zero” when idle, is known as “serverless”. A variety of open-source projects offer serverless technologies, for example, the Knative Serving® project (Knative is a trademark of Google LLC), which provides “scale to zero” for workloads running on the Kubernetes container orchestration system. Scaling to zero involves allowing a scaling service to terminate all instances of a service when there are no requests for the service to process. It is accompanied by a corresponding ability to scale the service up to one or more instances once such a request arrives.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chawla in view of Nandakumar to apply a scale to zero capability to the image, as taught by Jones.
The motivation would have been to reduce the start latency of serverless microservices associated with software services.
As per claim 36, Chawla in view of Nandakumar discloses:
The system of claim 21, wherein the software platform (Chawla Fig. 14, Col. 136 Lines 43-48)
Chawla in view of Nandakumar does not disclose:
a Software as a Service (SaaS) platform
Jones discloses:
a Software as a Service (SaaS) platform (para 0137 and 0138 “[0137] Service Models are as follows: [0138] Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chawla in view of Nandakumar to include a Software as a Service (SaaS) platform, as taught by Jones.
The motivation would have been to provide the consumer with the capability to use a provider's applications running on a cloud infrastructure.
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
A. U.S. Publication No. 20230014233 discloses on Paragraph 0068 “The process begins when the computer receives code of a serverless application function that corresponds to a service provided by an entity from a client device of an application developer via a network (step 402). The computer performs a scan of the code of the serverless application function to determine whether the code indicates that the serverless application function will run for more than a defined maximum threshold amount of time to generate a response to a request for the service or will call an external service for the response.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GARY S GRACIA whose telephone number is (571)270-5192. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip Chea, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GARY S GRACIA/Primary Examiner, Art Unit 2499