DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 8-10, 12, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaywargiya (WO 2023014940 A1) in view of Sakalley (US 20230133020 A1).
Regarding Claim 1, Vijaywargiya teaches a computer-implemented method, comprising: publishing user code associated with a task included in a content creation pipeline (
Vijaywargiya discloses, “At least one embodiment pertains to software versioning and deployment. Embodiments relate to automated continuous integration and continuous deployment (CI/CD) pipelines used to version and package infrastructure components for a data center,” ¶ 0001, “Accordingly, as set forth above, embodiments of the present invention provide solutions that include the use of automated pipelines to version and package individual components, publish them to a repository, and then create a distributable artifact repository bundle using all the disparate components. The single artifact repository solution can then be conveniently shipped to remote sites using an over-the-air workflow,” ¶ 0005, “The various disparate infrastructure components may further include automation source code, system configurations, and various types of package installations used to implement a sequence of complex workflows to set up the remote data center 250,” ¶ 0045, and “Once each of the various disparate infrastructure components are versioned and packaged, they are published (e.g., stored) in the external artifact repository 290,” ¶ 0046.
The claimed “content creation pipeline” is mapped to the disclosed “automated continuous integration and continuous deployment (CI/CD) pipeline”. This pipeline is a content creation pipeline because it is used to create artifacts/contents from various components for distribution and deployment.),
wherein the task is one of a plurality of tasks associated with the content creation pipeline, wherein the content creation pipeline is configured in a cloud computing environment, and wherein tasks associated with the content creation pipeline can execute in a plurality of computing environments (
Vijaywargiya discloses, “The provider cloud server 310 may include a source artifact repository 315 and a packaging and bundling component 320 similar to packaging and bundling component 214 of FIG. 2. The packaging and bundling component 320 may include a continuous integration and continuous delivery (CI/CD) pipeline 322,” ¶ 0056, “Responsive to a request from an end-user of the remote data center, the deployment manager identifies a cluster (e.g., a subset of the plurality of nodes) of the remote data center to automate the provisioning and management of one or more resources associated with an application or computing platform. The deployment management identifies a top-level service among the one or more resources to be provisioned (e.g., installed or deployed) on the cluster,” ¶ 0058, “In embodiments, the deployment manager 500 is configured to provision and manage the plurality of infrastructure components in workload clusters 570A-C each including one or more containers … Each service may include one or more dependent resources necessary to provision the workload cluster,” ¶ 0072, “In at least one embodiment, deployment system 1306 may be configured to offload processing and compute resources among a distributed computing environment to reduce infrastructure requirements at facility 1302,” ¶ 0130, and “In at least one embodiment, system 1400 (e.g., training system 1304 and/or deployment system 1306) may be implemented in a cloud computing environment (e.g., using cloud 1426),” ¶ 0145.
The claimed “plurality of tasks” is mapped to a plurality of tasks to provision (install or deploy) resources on a chosen cluster, according to requests.
The claimed “plurality of computing environments” is mapped to the environments within the workload clusters. Each of the tasks to provision resources on a cluster can occur in different clusters of the overall environment.);
packaging the user code separately for each of the plurality of computing environments, wherein each user code package enables the user code to execute within the respective computing environment without modification to the user code (
Vijaywargiya discloses, “To solve the problem of provisioning and managing HCI data center components efficiently, embodiments of the present invention use a package artifact repository — also known as a repository. Such a solution elegantly packages versioned individual components first, populates an internal artifact repository automatically, and then creates a distributable container. This all-in-one container includes components able to set up the artifact repository at a remote cluster,” ¶ 0007, “Once the base OS is installed on the command node, an automation script of the ISO image may automatically trigger the installer to install core services 444,” ¶ 0071, and “In embodiments, the deployment manager 500 is configured to provision and manage the plurality of infrastructure components in workload clusters 570A-C each including one or more containers (e.g., container 572A and 574A for workload cluster 570A, container 572B and 574AB for workload cluster 570B, and container 572C and 574C for workload cluster 570C) using a service in response to a request to provision a workload cluster (e.g., workload cluster 570A) for an application,” ¶ 0072.
Each of the workload clusters has its own containers (e.g., 572A and 574A for cluster 570A; 572B and 574B for cluster 570B) that were packaged separately for that workload cluster.);
storing artifacts (
Vijaywargiya discloses, “The application management platform 200 may identify various disparate infrastructure components necessary to set up and/or update remote data center 250. The various disparate infrastructure components may include networking components, compute components, storage components, security components, versioning components, provisioning components, etc. … These various disparate infrastructure components are stored in a source artifact repository 212 once developed by the developers,” ¶ 0045, and “In embodiments, the deployment manager 500 is configured to provision and manage the plurality of infrastructure components in workload clusters 570A-C each including one or more containers (e.g., container 572A and 574A for workload cluster 570A,” ¶ 0072.
The claimed “task registry” is mapped to the disclosed “source artifact repository”, which acts as a registry for storing the disparate infrastructure components for later provisioning tasks.);
responsive to the task being selected for execution, determining, based on a set of rules, a computing environment from the plurality of computing environments in which to execute the task (
Vijaywargiya discloses, “Responsive to a request from an end-user of the remote data center, the deployment manager identifies a cluster (e.g., a subset of the plurality of nodes) of the remote data center to automate the provisioning and management of one or more resources associated with an application or computing platform. The deployment management identifies a top-level service among the one or more resources to be provisioned (e.g., installed or deployed) on the cluster. The custom controller of the operator associated with the top-level services determines that a current state of the cluster does not match the target state of the cluster (e.g., the cluster is empty or does not include the top-level service). The custom controller associated with the top-level service synchronizes the current state of the cluster associated with the top-level service with the target state of the cluster associated with the top-level service, which may include installing the dependent resources associated with the top-level service to the cluster,” ¶ 0028.
The claimed “set of rules” is mapped to the disclosed rule determining whether the current state of the chosen cluster does not match the target state of the cluster. If so, the cluster is chosen for provisioning the resource.);
launching the computing environment in accordance with the set of rules (
Vijaywargiya discloses, “As a result, the deployment manager generates a custom resource definition (CRD) for each of the dependent resources associated with the top-level service and provides the generated CRD to the custom controllers associated with each of the dependent resources. The custom controllers associated with the dependent resources synchronize the current state of the cluster associated with the dependent resources with the target state of the cluster associated with the dependent resources,” ¶ 0028, and “The client-side deployment manager component 262 may notify the server-side deployment manager component 218 that the workload cluster is set up and provisioned for the application,” ¶ 0051.);
and executing the task in the computing environment using the artifacts (
Vijaywargiya discloses, “These various disparate infrastructure components are stored in a source artifact repository 212 once developed by the developers,” ¶ 0045, “The application management platform 200 may utilize the packaging and bundling component 214 to version, package, and bundle the various disparate infrastructure components into a distributable container,” ¶ 0046, “The command node 260, once set up and provisioned, may include the client-side deployment manager component 262, the client-side update component 264, and a container 266 storing the versioned distributable container,” ¶ 0048, and “The client-side deployment manager component 262 may provision the workload cluster 270 with the dependent resources and any necessary infrastructure components from the container 266 based on the identified top-level service and dependent resources,” ¶ 0051.
Here, the various disparate infrastructure components (mapped to the claimed “artifacts”) are bundled into a container, which is then used to provision the workload cluster 270 as part of the provisioning task.).
Vijaywargiya does not teach storing artifact locations associated with the user code for each of the plurality of computing environments in a task registry, and executing the task in the computing environment using the artifact locations from the task registry.
However, Sakalley teaches storing artifact locations associated with the user code for each of the plurality of computing environments in a task registry, and executing the task in the computing environment using the artifact locations from the task registry (
Sakalley discloses, “A registry of artifacts may be stored in a distributed database, a centralized database, or otherwise available to one or more IPUs that are orchestrating and scheduling tasks. The registry may include identifiers of the artifacts, their location, and other metadata about the artifacts, such as reliability, capability, security features, geographical location, service costs, and the like,” ¶ 0129.
After the combination of Vijaywargiya with Sakalley, Sakalley’s artifact registry is now used to store the locations of Vijaywargiya’s artifacts, and the provisioning of resources now uses these locations.).
Vijaywargiya and Sakalley are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya to incorporate the teachings of Sakalley and provide storing artifact locations associated with the user code for each of the plurality of computing environments in a task registry, and executing the task in the computing environment using the artifact locations from the task registry. Doing so would allow easier access to the artifacts by using their locations stored in the registry (Sakalley discloses, “The registry may include identifiers of the artifacts, their location, and other metadata about the artifacts, such as reliability, capability, security features, geographical location, service costs, and the like,” ¶ 0129.).
Claims 10 and 19 are a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) and a system claim (Vijaywargiya ¶ 0180), respectively, corresponding to the computer-implemented method Claim 1. Therefore, Claims 10 and 19 are rejected for the same reason set forth in the rejection of Claim 1.
Regarding Claim 3, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 1, wherein executing the task in the computing environment comprises: accessing the task registry to retrieve artifact locations associated with the user code corresponding to the task; retrieving a code package associated with the computing environment; and executing the code package in the computing environment (
Vijaywargiya discloses, “The application management platform 200 may identify various disparate infrastructure components necessary to set up and/or update remote data center 250… These various disparate infrastructure components are stored in a source artifact repository 212 once developed by the developers,” ¶ 0045, “The application management platform 200 may utilize the packaging and bundling component 214 to version, package, and bundle the various disparate infrastructure components into a distributable container,” ¶ 0046, “In some embodiments, the application management platform 200 may utilize the packaging and bundling component 214 to create an image (e.g., an ISO image) including a versioned distributable container which can be provided to a customer of the remote data center 250 and/or to a node of the remote data center 250 to set up the remote data center 250… The command node 260, once set up and provisioned, may include the client-side deployment manager component 262, the client-side update component 264, and a container 266 storing the versioned distributable container,” ¶ 0048, and “The client-side deployment manager component 262 may provision the workload cluster 270 with the dependent resources and any necessary infrastructure components from the container 266 based on the identified top-level service and dependent resources,” ¶ 0051.
Here, the source artifact repository (mapped to the claimed “task registry”) is accessed to retrieve the artifacts, which are then bundled into a distributable container (mapped to the claimed “code package”). The container is then retrieved by a customer to provision a workload cluster as part of a provisioning task.
After the combination of Vijaywargiya with Sakalley, these steps are now performed using Sakalley’s artifact registry, which allows for accessing the “artifact locations” for easier retrieval.).
Claim 12 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method Claim 3. Therefore, Claim 12 is rejected for the same reason set forth in the rejection of Claim 3.
Regarding Claim 8, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 1, wherein the artifact locations comprise locations of compiled versions of the user code, setup scripts associated with the user code, test suites associated with the user code, generated objects associated with the user code, logs generated during testing and quality assurance associated with the user code, or other metadata associated with the user code (
Sakalley discloses, “Artifacts may be in the form of a bitstream, bit file, programming file, or executable file, binary file, other configuration file used to configure an FPGA, CGRA, ASIC, or general CPU to execute an acceleration operation,” ¶ 0128, and “A registry of artifacts may be stored in a distributed database, a centralized database, or otherwise available to one or more IPUs that are orchestrating and scheduling tasks. The registry may include identifiers of the artifacts, their location, and other metadata about the artifacts, such as reliability, capability, security features, geographical location, service costs, and the like. The registry may be stored in a datastore, datalake, database, or the like, such as illustrated and described in FIGS. 11, 17, and 18,” ¶ 0129.).
Vijaywargiya and Sakalley are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya to incorporate the teachings of Sakalley and provide wherein the artifact locations comprise locations of compiled versions of the user code, setup scripts associated with the user code, test suites associated with the user code, generated objects associated with the user code, logs generated during testing and quality assurance associated with the user code, or other metadata associated with the user code. Doing so would allow easier access to the artifacts by using their locations stored in the registry (Sakalley discloses, “The registry may include identifiers of the artifacts, their location, and other metadata about the artifacts, such as reliability, capability, security features, geographical location, service costs, and the like,” ¶ 0129.).
Claim 17 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method Claim 8. Therefore, Claim 17 is rejected for the same reason set forth in the rejection of Claim 8.
Regarding Claim 9, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 1, wherein the content creation pipeline is associated with a content streaming platform (
Vijaywargiya discloses, “In at least one embodiment, software 132 included in software layer 130 may include software used by at least portions of node C.R.s 116(1)-116(N), grouped computing resources 114, and/or distributed file system 128 of framework layer 120. The one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software,” ¶ 0037.).
Claim 18 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method Claim 9. Therefore, Claim 18 is rejected for the same reason set forth in the rejection of Claim 9.
Claims 2, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaywargiya (WO 2023014940 A1) in view of Sakalley (US 20230133020 A1) and Youakim (US 20210096897 A1).
Regarding Claim 2, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 1. Vijaywargiya in view of Sakalley does not teach wherein the plurality of computing environments comprises a cloud-computing environment, a data center, a private cloud, and a third-party computing environment.
However, Youakim teaches wherein the plurality of computing environments comprises a cloud-computing environment, a data center, a private cloud, and a third-party computing environment (
Youakim discloses, “In the cloud computing environment 50, one or more clients 52A-52C (such as those described above) are in communication with a cloud network 54. The cloud network 54 may include backend platforms, e.g., servers, storage, server farms or data centers. The users or clients 52A-52C can correspond to a single organization/tenant or multiple organizations/tenants. More particularly, in one example implementation the cloud computing environment 50 may provide a private cloud serving a single organization (e.g., enterprise cloud). In another example, the cloud computing environment 50 may provide a community or public cloud serving multiple organizations/tenants. In still further embodiments, the cloud computing environment 50 may provide a hybrid cloud that is a combination of a public cloud and a private cloud. Public clouds may include public servers that are maintained by third parties to the clients 52A-52C or the enterprise/tenant. The servers may be located off-site in remote geographical locations or otherwise,” ¶ 0039.).
Vijaywargiya in view of Sakalley, and Youakim are both considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley to incorporate the teachings of Youakim and provide wherein the plurality of computing environments comprises a cloud-computing environment, a data center, a private cloud, and a third-party computing environment. Doing so would increase the flexibility of how the computing environments are set up (Youakim discloses, “The cloud network 54 may include backend platforms, e.g., servers, storage, server farms or data centers. The users or clients 52A-52C can correspond to a single organization/tenant or multiple organizations/tenants. More particularly, in one example implementation the cloud computing environment 50 may provide a private cloud serving a single organization (e.g., enterprise cloud). In another example, the cloud computing environment 50 may provide a community or public cloud serving multiple organizations/tenants. In still further embodiments, the cloud computing environment 50 may provide a hybrid cloud that is a combination of a public cloud and a private cloud,” ¶ 0039.).
Claims 11 and 20 are a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) and a system claim (Vijaywargiya ¶ 0180), respectively, corresponding to the computer-implemented method Claim 2. Therefore, Claims 11 and 20 are rejected for the same reason set forth in the rejection of Claim 2.
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaywargiya (WO 2023014940 A1) in view of Sakalley (US 20230133020 A1) and Anand (US 20210342193 A1).
Regarding Claim 4, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 3, wherein executing the code package in the computing environment comprises: (
Vijaywargiya discloses, “In at least one embodiment, because one or more of applications or containers in deployment pipeline(s) 1410 may share same services and resources, application orchestration system 1428 may orchestrate, load balance, and determine sharing of services or resources between and among various applications or containers. In at least one embodiment, a scheduler may be used to track resource requirements of applications or containers, current usage or planned usage of these resources, and resource availability,” ¶ 0155.
The resource requirements of each application and container, as well as the resource availability, are determined. Generally, this determination is made for the workload cluster on which the applications and containers are provisioned.);
mounting storage for data to be written out by an execution of the user code associated with the code package (
Vijaywargiya discloses, “In at least one embodiment, shared storage may be mounted to AI services 1418 within system 1400. In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications,” ¶ 0158, and “In at least one embodiment, during application execution, an inference request for a given application may be received, and a container (e.g., hosting an instance of an inference server) may be loaded (if not already), and a start procedure may be called. In at least one embodiment, pre-processing logic in a container may load, decode, and/or perform any additional pre-processing on incoming data (e.g., using a CPU(s) and/or GPU(s)),” ¶ 0160.);
retrieving data to be read in during the execution of the user code from one or more other computing environments (
Vijaywargiya discloses, “In at least one embodiment, communication between facilities and components of system 1400 (e.g., for transmitting inference requests, for receiving results of inference requests, etc.) may be communicated over data bus(ses), wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc.,” ¶ 0145, “In at least one embodiment, a software layer may be implemented as a secure, encrypted, and/or authenticated API through which applications or containers may be invoked (e.g., called) from an external environment(s),” ¶ 0150, “In at least one embodiment, shared storage may be mounted to AI services 1418 within system 1400. In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications,” ¶ 0158, and “In at least one embodiment, during application execution, an inference request for a given application may be received, and a container (e.g., hosting an instance of an inference server) may be loaded (if not already), and a start procedure may be called. In at least one embodiment, pre-processing logic in a container may load, decode, and/or perform any additional pre-processing on incoming data (e.g., using a CPU(s) and/or GPU(s)),” ¶ 0160.
Here, incoming data can be read in from external environments during the execution of the application.);
and mounting the data to be read in during the execution of the user code in a local storage environment (
Vijaywargiya discloses, “In at least one embodiment, system 1400 (e.g., training system 1304 and/or deployment system 1306) may be implemented in a cloud computing environment (e.g., using cloud 1426). In at least one embodiment, system 1400 may be implemented locally with respect to a healthcare services facility,” ¶ 0145,
“In at least one embodiment, system 1400 may be configured to access and reference data from PACS servers to perform operations, such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other operations,” ¶ 0149.
Here, the locally implemented system accesses data from the PACS servers, and this data is mounted to the system. The data is read in and stored on the system itself for processing.).
Vijaywargiya in view of Sakalley does not teach accessing a storage registry associated with the computing environment to determine an availability of storage in the computing environment, and wherein the local storage environment is determined using the storage registry.
However, Anand teaches accessing a storage registry associated with the computing environment to determine an availability of storage in the computing environment, and wherein the local storage environment is determined using the storage registry (
Anand discloses, “Data repository 108 in global availability registry 2123 can store an iteratively updated list of available compute nodes within system 100 available to host a container based application. Global availability registry 2123 can store data on predicted availability of compute nodes within system 100 across a plurality of availability performance metrics, e.g., CPU availability, memory availability, storage availability, and I/O availability,” ¶ 0035.
After the combination of Vijaywargiya in view of Sakalley, with Anand, the global availability registry from Anand is accessed to determine storage availability in the computing environment, and the storage environment is determined using this registry.).
Vijaywargiya in view of Sakalley, and Anand are both considered to be analogous to the claimed invention because they are in the same field of container orchestration. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley to incorporate the teachings of Anand and provide accessing a storage registry associated with the computing environment to determine an availability of storage in the computing environment, and wherein the local storage environment is determined using the storage registry. Doing so would provide a convenient way to determine the amount of storage available in each computing environment, which would allow for better allocation of computing resources (Anand discloses, “Data repository 108 in global availability registry 2123 can store an iteratively updated list of available compute nodes within system 100 available to host a container based application. Global availability registry 2123 can store data on predicted availability of compute nodes within system 100 across a plurality of availability performance metrics, e.g., CPU availability, memory availability, storage availability, and I/O availability,” ¶ 0035.).
Claim 13 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method Claim 4. Therefore, Claim 13 is rejected for the same reason set forth in the rejection of Claim 4.
Claims 5-6 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaywargiya (WO 2023014940 A1) in view of Sakalley (US 20230133020 A1), Anand (US 20210342193 A1), Chandnani (US 9436449 B1), and Kotni (US 20230342373 A1).
Regarding Claim 5, Vijaywargiya in view of Sakalley and Anand teaches the computer-implemented method of claim 4. Vijaywargiya in view of Sakalley and Anand does not teach further comprising: mapping each file needed for the execution of the user code to a scheme using a protocol handler, wherein the scheme conveys information regarding an environment in which a respective file is located.
However, Chandnani teaches further comprising: mapping each file needed for the execution of the user code to a scheme using a protocol handler (
Chandnani discloses, “In some examples, the application 120 is a web application 202, and the system includes a mapping 228 between cached files 222 and dynamic uniform resource locators 140 which are functionally equivalent with respect to producing the scenario behavior log 136 from execution of the web application. The mapping 228 may be implemented using a table, tree, dictionary, collection of key-value pairs, or other suitable data structure,” Col 12, Lines 42-49.
Here, the “cached files 222”, which are required for execution of a web application, are mapped to a scheme using a “mapping 228” that acts as a protocol handler with “dynamic uniform resource locators 140”, which are uniform resource identifiers.
After the combination of Vijaywargiya in view of Sakalley and Anand, with Chandnani, the files needed for execution of the user code for the provisioning task, as taught by Vijaywargiya in view of Sakalley and Anand, are mapped to a scheme using the mapping as specified by Chandnani.).
Vijaywargiya in view of Sakalley and Anand, and Chandnani are both considered to be analogous to the claimed invention because they are in the same field of software architecture. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley and Anand to incorporate the teachings of Chandnani and provide further comprising: mapping each file needed for the execution of the user code to a scheme using a protocol handler. Doing so would allow easier access to each file (Chandnani discloses, “In some examples, the application 120 is a web application 202, and the system includes a mapping 228 between cached files 222 and dynamic uniform resource locators 140 which are functionally equivalent with respect to producing the scenario behavior log 136 from execution of the web application. The mapping 228 may be implemented using a table, tree, dictionary, collection of key-value pairs, or other suitable data structure,” Col 12, Lines 42-49.).
Vijaywargiya in view of Sakalley, Anand, and Chandnani does not teach wherein the scheme conveys information regarding an environment in which a respective file is located.
However, Kotni teaches wherein the scheme conveys information regarding an environment in which a respective file is located (
Kotni discloses, “The metadata information may include the name of a file, a size of the file, file permissions associated with the file, when the file was last modified, and file mapping information associated with an identification of the location of the file stored within a cluster of physical machines,” ¶ 0057.).
Vijaywargiya in view of Sakalley, Anand, and Chandnani, and Kotni are considered to be analogous to the claimed invention because they are in the same field of cloud computing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley, Anand, and Chandnani to incorporate the teachings of Kotni and provide wherein the scheme conveys information regarding an environment in which a respective file is located. Doing so would allow for easier access to each file (Kotni discloses, “The metadata information may include the name of a file, a size of the file, file permissions associated with the file, when the file was last modified, and file mapping information associated with an identification of the location of the file stored within a cluster of physical machines,” ¶ 0057.).
Claim 14 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method of Claim 5. Therefore, Claim 14 is rejected for the same reason set forth in the rejection of Claim 5.
Regarding Claim 6, Vijaywargiya in view of Sakalley, Anand, Chandnani, and Kotni teaches the computer-implemented method of claim 5, wherein the mapping is performed using a uniform resource identifier (URI) (
Chandnani discloses, “In some examples, the application 120 is a web application 202, and the system includes a mapping 228 between cached files 222 and dynamic uniform resource locators 140 which are functionally equivalent with respect to producing the scenario behavior log 136 from execution of the web application,” Col 12, Lines 42-47.).
Vijaywargiya in view of Sakalley, Anand, and Kotni, and Chandnani are considered to be analogous to the claimed invention because they are in the same field of computer networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley, Anand, and Kotni to incorporate the teachings of Chandnani and provide wherein the mapping is performed using a uniform resource identifier (URI). Doing so would allow for easier access to each file (Chandnani discloses, “In some examples, the application 120 is a web application 202, and the system includes a mapping 228 between cached files 222 and dynamic uniform resource locators 140 which are functionally equivalent with respect to producing the scenario behavior log 136 from execution of the web application. The mapping 228 may be implemented using a table, tree, dictionary, collection of key-value pairs, or other suitable data structure,” Col 12, Lines 42-49.).
Claim 15 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method of Claim 6. Therefore, Claim 15 is rejected for the same reason set forth in the rejection of Claim 6.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaywargiya (WO 2023014940 A1) in view of Sakalley (US 20230133020 A1) and Nadimpalli (US 20190332433 A1).
Regarding Claim 7, Vijaywargiya in view of Sakalley teaches the computer-implemented method of claim 1. Vijaywargiya in view of Sakalley does not teach wherein the task is selected for execution by a state machine associated with the content creation pipeline based on an event or condition occurring that is monitored by the state machine.
However, Nadimpalli teaches wherein the task is selected for execution by a state machine associated with the content creation pipeline based on an event or condition occurring that is monitored by the state machine (
Nadimpalli discloses, “In some embodiments, the stateless application module may select a state machine configuration corresponding to the workflow, and assign an initial state in the configuration to the event. The stateless application module may also create, for the event, a record that includes state data (e.g., context data) that identifies a current state of the event, and may store the record in a persistent data storage (e.g., a hard drive, a flash drive, etc.), such as the context database,” ¶ 0014.).
Vijaywargiya in view of Sakalley, and Nadimpalli are considered to be analogous to the claimed invention because they are in the same field of application frameworks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Vijaywargiya in view of Sakalley to incorporate the teachings of Nadimpalli and provide wherein the task is selected for execution by a state machine associated with the content creation pipeline based on an event or condition occurring that is monitored by the state machine. Doing so would help ensure that appropriate actions are taken based on the current state (Nadimpalli discloses, “The stateless application framework may be utilized by different applications for implementing different workflows. Each workflow may be associated with one or more state machine configurations representing different states within the workflow. Upon receiving an indication of an event from an application, the stateless application module may initiate a corresponding workflow for the application,” ¶ 0013.).
Claim 16 is a non-transitory computer-readable storage medium (Vijaywargiya ¶ 0180) corresponding to the computer-implemented method of Claim 7. Therefore, Claim 16 is rejected for the same reason set forth in the rejection of Claim 7.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Rao (US 20230205601 A1): Stateless Content Management System
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SUN whose telephone number is (571)272-6735. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW NMN SUN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195