DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the RCE filed on 12/4/2025. The application claims priority to provisional application 63/058260, filed 7/29/2020. Claims 1-5, 7-12, 14-18, and 20-23 are pending.
Information Disclosure Statement
The information disclosure statement filed 8/27/2020 (previously lined through in the annotated IDS sent out on March 7, 2024) fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but the information referred to therein has not been considered. Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5, 8-12, 15-18, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Simpson et al. (US PGPUB 2021/0035225), in view of Sreedhar et al. (US PGPUB 2020/0226271), in view of Orti et al. (“Decision Service Lifecycle on IBM Cloud Private”, www.ibm.com/community/automation/docs/odm-2/best-practices/odm-lifecycle-icp/, May 29, 2018), further in view of Caldato et al. (US PGPUB 2019/0102157).
As for claim 1, Simpson teaches generating a rule unit as a microservice, wherein the rule unit comprises a partition of a rule base [rules repository] including a rule [any of the business decision rules depicted is understood as a rule of the rule base] (paragraph 74, “…the orchestration layer 120 and a service and rules engine layer 130 may be built using a micro-service architecture…”, paragraph 142, “…each of …the rules engine sublayer 222 may be built using a micro-service architecture…” and paragraph 127, “rules engine layer ….correspond to a subset of the set of business decision rules in the rules repository, wherein each subset will be comprised of one or more rules from the set of rules in the repository….in the example of Fig. 2…set of rules libraries 234 236 238 240….one or more components of the rules repository….may be separate components …” Here, the rules engine layer can be a subset of rules, such as one of the rules libraries 234-240, each of which is understood as a partition of (i.e., a subset of) the rule base (i.e., all of the application business rules), and each of these rules engine sublayers 222 is implemented using a micro-service architecture);
deploying, by a processing device, the microservice [any rules of the “subset of rules”] on a platform (paragraphs 74 and 142 in view of paragraphs 145-167, which teach one or more rules can exist in each subset of rules. Because the prior art teaches the rules engine sublayer executing the rules evaluation using a micro-service architecture, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to recognize the rule(s) of the subset of rules can be implemented as a microservice. In addition, Examiner notes that when the rules engine executes the evaluation of the exemplary subset of rules, it clearly has been deployed.), wherein deploying the containerized microservice on the container platform comprises:
associating, from a manifest file that identifies rules (paragraph 124, “…one or more decision rules from the rules repository, the one or more decision rules corresponding to…and being a subset of decisions rules in the rules repository…”), a data source [data repository data subset] of the containerized microservice to a corresponding backing channel (paragraph 27, “identify in the data repository a corresponding data subset required for …decision rule….and executing each of the one or more decision rules at least in part by processing at least …the corresponding data subset” in view of paragraph 125, “…the data repository 210 and one or more components of the rules libraries….may be separate components in communication with the service and rules engine layer…” Examiner notes “a data source….to a corresponding backing channel” is understood in view of the specification, which states, “a …data source may be accessed externally through a channel…where data is actually stored…such a channel may be referred to as a backing channel of the data source…” (Specification, paragraph 43). Thus, a data source to a corresponding backing channel is understood as any networking connection between the data source and the rules library that takes the data source as input to process said data source’s data, which is functionally similar to the prior art teaching that the data subset, the data repository the subset resides on, and the rules libraries are communicatively coupled via a network.); and
enabling message passing between the microservice and an additional containerized microservice [service sublayer] (paragraph 142, “…each of service sublayer 214 …may be built using a micro-service architecture…” and paragraph 143, “the rules engine sublayer …is configured for receiving requests from the service sublayer 214 …in order to generate a corresponding result…” The requests and results in the bidirectional communication between the two layers implemented using micro-services are understood as forms of message passing.).
putting, via the enabled message passing, a message [request] on the shared channel for the microservice (Fig. 2 – “Rest Service” and paragraph 143, “…the rules engine sublayer 222 is configured for receiving requests…” teach a REST service is used to pass requests that invoke the rules layer services, and paragraph 8, “…execution of these rules including information provided in….result of rules executed…” teaches input into a rule can be the result of other rules executed (i.e., output from another rule). Here, while the prior art does not use the word “message,” both the requests received by the subset of rules to be executed and the input of data from the result of other rules executed can be understood as forms of a “message” input into execution of a rule of the subset of rules corresponding to a service call);
executing the microservice based on the rule of the rule unit by accessing the message on the shared channel between the microservice and the additional microservice (paragraph 143, “….the rules engine sublayer …is configured for receiving requests…to apply a specific rule to specific parameters in order to generate a corresponding result…” in view of paragraph 142, “…service sublayer…and the rules engine sublayer…may be built using a micro-service architecture…”); and
putting a second message [result of rules executed] on the shared channel for the additional microservice based on executing the microservice (paragraph 8, “…execution of these rules including information provided in….result of rules executed…” and paragraph 143, “…generate a corresponding result…the service sublayer …provide an interface between the orchestration layer ….and the rules engine sublayer…” Thus, it is clear the service sublayer receives the result of the rules executed in response to the request sent to the rules engine sublayer, as mapped in the preceding claim limitation).
Because Simpson’s subsets of rules clearly communicate with each other to implement processing of data with the entirety of the subset of rules (paragraphs 145-167, “…subset of rules…for example…: a CPhA conversion…mandatory claim elements…the identity of the carrier…a drug identification number…RAMQ eligibility; and/or student authorization and the likes…” “in yet another non-limiting example….cost verification…rules regarding cost calculations…trial prescriptions…pricing lookup…calculation parameters…eligible amounts…frequency limitations…payable amounts…” each teaching an exemplary subset of rules including a plurality of rules where the outputs of rules are used as inputs for other rules within the respective exemplary subset of rules), the rules can be implemented independently (paragraph 143, “…the individual rules implemented by the rules engine sublayer….are preferably configured to be substantially independent of each other…”), and the rules engine layer can be implemented using a micro-service architecture (paragraph 142), not only is it obvious to one of ordinary skill in the art that a micro-service architecture allows implementation of multiple micro-service instances for respective functionalities, but Simpson also specifically considers the use of off-the-shelf business rules systems to implement the rules engine layer (paragraph 143, “…rules engine sublayer….implemented …..using commercial off-the-shelf business rules systems such as …IBM operational Decision Manager, Redhat Decision Manager or any open source BRMS…”) that are well known to implement each rule as a microservice (see, e.g., Duncan Doyle, “Kogito-enablement” – README.MD, “…rule units….you will learn how the Kogito code-generation engine allows you to directly expose your business rules as a rules microservice for cloud deployment…”, github.com/KIE-Learning/kogito-enablement/commit/cdab2cebe78a8c8bc9d59097a654c754c1785c50, Jun 5, 2020).
Nevertheless, in the interest of compact prosecution, Examiner will note Simpson does not explicitly teach the rules engine layer implements each rule as a microservice wherein the rules within the respective microservices communicate with each other.
Sreedhar teaches a known method of implementation of rules utilizing micro-services (paragraph 65, “…microservices to …initialize rules and schemas …”), including message passing between the microservice and an additional containerized microservice over a shared channel [backplane/channel] between the containerized microservice and the additional containerized microservice (paragraph 35, “…microservices…configure them to communicate with each other via a backplane…”), the additional microservice being associated with a second rule unit comprising a second partition of the rule base including a second rule and a reference to a second partition of working memory (paragraph 65, “…microservice[s] to …initialize rules and schemas for …engines or …actions” teaching each service can correspond to a specific rule/schema/action. And Fig. 5 – transmission path 512/518/526/530 and paragraph 44-45, “…transmits 512 the channel data encapsulation packet…transmits 518 the packet…transmits, via path 526…transmits, via path 530, channel data encapsulation packet …” teaches path representing the location of input/output of data into/out of specific microservices.);
putting, via the enabled message passing, a message [packet] on the shared channel for the microservice (paragraph 44, Fig. 5 – packet 510 from microservice 508 to 514, or packet 516 going from microservice 514 to DPI microservice 520); executing the containerized microservice based on the rule of the rule unit by accessing the message on the shared channel between the microservice and the additional microservice; and
putting a second message on the shared channel for the additional microservice based on executing the microservice (paragraph 45, Fig. 5 – packets 524 and 528 teach data encapsulation packets flowing from microservice 508 to 514 and back to 508, and from 514 to 520 and from 520 to 514. Packets 524 and 528 can be understood as the second message from the microservices (514/520) that first received packets 510/526, respectively, as mapped above, before sending these subsequent packets to the additional microservice). This known technique is applicable to the system of Simpson as they share characteristics and capabilities; namely, they are directed to deployment of microservices to execute rules for the purpose of processing service requests.
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that applying the known technique of Sreedhar would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Sreedhar to the teachings of Simpson would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such containerized microservice features into similar systems. Further, applying message passing between the microservice and an additional microservice over a shared channel, where the additional microservice is associated with a second rule unit, putting a message on the shared channel for the microservice, having the microservice execute rules/schemas/actions by accessing the message on the shared channel, and putting a second message on the shared channel for the additional microservice based on executing the microservice, to Simpson’s rules engine running a plurality of rules for a service call utilizing commercial business rules engines implemented using microservices, accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would improve the dynamic deployment and execution of complex services with complicated policies/rules utilizing microservices (Sreedhar, paragraph 18).
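For illustration only, the shared-channel pattern mapped above — one rule-unit microservice putting a message on a shared channel, a second microservice executing its rule by accessing that message and putting a result message back — might be sketched as follows. This sketch is not taken from any cited reference or from the application as filed; all names are hypothetical, and an in-process queue stands in for the shared channel.

```python
from queue import Queue

# Hypothetical stand-ins for the shared channel between two rule-unit
# microservices; in a real deployment this would be a message bus or
# similar backing channel.
channel_in = Queue()   # messages for the rule-unit microservice
channel_out = Queue()  # result messages for the additional microservice

def rule_unit(fact):
    # A trivial illustrative "rule": approve amounts under a threshold.
    return {"claim": fact["claim"], "approved": fact["amount"] < 100}

# The additional microservice puts a message on the shared channel.
channel_in.put({"claim": "A-1", "amount": 42})

# The rule-unit microservice executes by accessing the message on the channel...
message = channel_in.get()
result = rule_unit(message)

# ...and puts a second message on the channel for the additional microservice.
channel_out.put(result)

print(channel_out.get())  # {'claim': 'A-1', 'approved': True}
```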
While Sreedhar mentions the server components may be implemented in full or in part using “cloud”-based components (paragraph 93), in the interest of compact prosecution, Examiner notes Simpson and Sreedhar do not explicitly state the rules services layer is implemented as a containerized microservice on a cloud platform, initializing a container platform scheduler to schedule the containerized microservice for execution, and that the containerized microservice includes one or more references to partitions of working memory, where working memory means a location of memory to deploy the containerized microservice.
However, Orti teaches a method of implementing IBM ODM based rule services in containers [Docker] on a cloud platform [IBM Cloud Private] (Page 3, figure depicting a Container containing the Decision Server; Pages 15-16, “IT team step 3: configure and install the ODM production instance…installs the ODM …on a different IBM Cloud Private master node….”; and Page 25, “Delivery team step 3: Promote the decision service to production…deploys the RuleApp archive (and its XOM) to Decision Server hosted on the production instance…ODM production instance…”), initializing a container platform scheduler to schedule the containerized microservice for execution (Page 3, figure depicting a Container containing the Decision Server; Pages 15-16, “IT team step 3: configure and install the ODM production instance…installs the ODM …on a different IBM Cloud Private master node….”; and Page 25, “Delivery team step 3: Promote the decision service to production…deploys the RuleApp archive (and its XOM) to Decision Server hosted on the production instance…ODM production instance…” Here, promoting the service to production, where it is deployed inside a container, is scheduling the containerized microservice for execution. Moreover, “initializing a container platform scheduler to schedule…” is understood as causing the container platform scheduler to start scheduling an execution), and the containerized microservice includes one or more references to partitions of working memory (Page 25, “Delivery team step 3: Promote the decision service to production…deploys the RuleApp archive (and its XOM) to Decision Server hosted on the production instance…ODM production instance…” in view of Page 7, “…ODM components, each component is deployed in a…separate docker containers…” Here, Examiner notes it is well known in the art that each docker container references a set of one or more working memories because docker containers reference and use a memory space shared with the Host OS.
See, e.g., Ben Smith, “Web developer’s Guide to Docker – ‘containing’ the future of deployment”, www.linkedin.com/pulse/web-developers-guide-docker-containing-future-deployment-ben-smith/, December 1, 2015, Page 1, “…docker container ….share ….memory space with the Host OS…” (the shared memory space used by the container is understood as memory referenced by the container); Zhao et al. (US PGPUB 2019/0121541), paragraph 46, “containers…read the same data…create …in the memory” necessarily needs to reference both the data (wherever that is) and the memory it uses to create pages, both of which are considered working memory referenced by the container.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Orti’s teachings of implementing IBM ODM based rule services in containers on a cloud platform, initializing a container platform scheduler to schedule the containerized microservice for execution, and the containerized microservice including one or more references to partitions of working memory into Simpson and Sreedhar because they are all directed to rules implementation inside microservice platforms and because doing so allows for easier development and testing of rules execution servers (Orti, Section “Introduction”).
While Simpson teaches processing of input data that uses a plurality of microservices for a specific service call (paragraph 181, “….for each specific decision rule in the one or more decision rules identified …as corresponding to the specific service call….the specific decision rules is processed …is executed using information in the rules repository…if there are remaining rules identified at step 500 …that have not yet been processed, the process returns to step 600 and the next unprocessed rule….”) and Orti teaches implementation of the microservices using containers (Page 3, figure depicting a Container containing the Decision Server; Pages 15-16, “IT team step 3: configure and install the ODM production instance…installs the ODM …on a different IBM Cloud Private master node….”; and Page 25, “Delivery team step 3: Promote the decision service to production…deploys the RuleApp archive (and its XOM) to Decision Server hosted on the production instance…ODM production instance…”), it would have been obvious to a person of ordinary skill in the art to recognize that there can be a plurality of containers to be deployed to execute the service request, where the system schedules a containerized microservice for execution from amongst a plurality of containerized microservices, because doing so allows for execution of the microservices/rules that are required to process the service request in a containerized environment. Nevertheless, in the interest of compact prosecution, Examiner notes Simpson, Sreedhar, and Orti do not explicitly teach scheduling a containerized microservice for execution from amongst a plurality of containerized microservices such that the plurality of containerized microservices are all created when the scheduling occurs.
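For illustration only, selecting one containerized microservice for execution from amongst a plurality of already-created containerized microservices might be sketched as below. This is a hypothetical toy round-robin scheduler, not the scheduler of Caldato, Orti, or any other cited reference; all names are illustrative.

```python
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Container:
    # Hypothetical model of a containerized rule-unit microservice.
    name: str
    assigned: list = field(default_factory=list)

class RoundRobinScheduler:
    """Toy scheduler: picks one container from the plurality in turn."""
    def __init__(self, containers):
        self._turns = cycle(containers)

    def schedule(self, request):
        container = next(self._turns)
        container.assigned.append(request)
        return container

# The plurality of containerized microservices exists when scheduling occurs.
pool = [Container("rules-a"), Container("rules-b"), Container("rules-c")]
scheduler = RoundRobinScheduler(pool)

chosen = scheduler.schedule("service-call-1")
print(chosen.name)  # rules-a
```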
However, Caldato teaches a known method of microservice container deployment for a service across multiple computing devices, including scheduling the containerized microservice for execution from amongst a plurality of containerized microservices including the containerized microservice (Abstract, “…distributing microservice containers ….include…deploying the plurality of containerized micro services across the plurality of computing environments…” Here, any of the microservices of the plurality of containerized microservices is understood as “the containerized microservice for execution” and the other containerized microservices are the plurality of containerized micro services also to be deployed. See also paragraph 11). This known technique is applicable to the system of Simpson, Sreedhar, and Orti as they all share characteristics and capabilities; namely, they are directed to containerized deployment of microservices for the purpose of processing service requests.
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that applying the known technique of Caldato would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Caldato to the teachings of Simpson, Sreedhar, and Orti would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such containerized microservice features into similar systems. Further, applying scheduling the containerized microservice for execution from amongst a plurality of containerized microservices including the containerized microservice to Simpson, Sreedhar, and Orti, which run and deploy a plurality of microservices for processing service requests and implement and schedule the microservices in containers for execution, accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would improve the granularity of task deployment, allowing easier management, upgrading, and other functionality to be applied individually to each of the more granularly implemented task codes (Caldato, paragraph 7).
As for claim 2, Simpson teaches identifying the rule unit [subset of rules] in the manifest file [rule database] (paragraphs 125 and 141, “…configured for processing….to identify a subset of rules in the rules repository….”); and generating a rule unit microservice comprising one or more rules of the manifest file (paragraph 142, and Fig. 2 – Rules services 232 contain a subset of rules from the business rules Database 212).
Orti also teaches packaging the rule unit microservice into a rule unit container as the containerized microservice (Page 25, “Delivery team step 3: Promote the decision service to production…deploys the RuleApp archive…to decision server….”).
As for claim 3, Orti also teaches generating the one or more rules of the manifest file in an executable format (Pg. 24, “…deploys the decision service to the test instance….” and Pg. 25, “….deploys the RuleApp archive (and its XOM) to decision Server…on the production instance…” Examiner notes that in the former testing phase, the decision service, deployable to run on a test instance in a container, is definitionally an executable. In the deployment phase, both archives and XOMs are well known as parts of executable code.).
As for claim 4, Simpson also teaches wherein the executable format provides read and write access to the data source corresponding to the one or more rules via an application programming interface (API) (paragraphs 49-50 and 181, “…the specific decision rules is processed to identify in data repository…a specific data subset required for the execution of the specific decision rule…” and paragraph 143, “…REST…rules engine sublayer ….is configured for receiving requests…to apply a specific rule…” and Figure 2 – communication to/from the rule services uses REST services, which is understood as a well-known API. The rules services’ determination of which data subset is required is understood as allowing read/write access to the specific data subset).
As for claim 5, Simpson also teaches the API is a representational state transfer (REST) API (paragraphs 143-144, “…REST service module…” Communication to/from the rules module is understood as following the REST API guidelines).
As for claims 8-11 and claims 15-18, they are the system and product claims corresponding to method claims 1-4 above, respectively. Thus, they are rejected under the same rationales.
As for claim 12, it contains similar limitations as claim 5 above. Thus, it is rejected under the same rationales.
As for claim 21, Simpson also teaches at least one of read functionality or write functionality of the data source is exposed via a message bus (paragraphs 49-50 and 181, “…the specific decision rules is processed to identify in data repository…a specific data subset required for the execution of the specific decision rule…” and Figure 2 – communication to/from the rule services uses REST services, which is understood as a well-known API, and the rules services’ determination of which data subset is required to work on is understood as read access to the specific data subset. Alternatively, paragraph 125, “a data repository…may be …in communications with the …rules engine layer 130 over a data network”. Here, Examiner notes the present application merely recites “message bus” without giving any exemplary embodiments, and the only recitation beyond merely repeating the words “message bus” states, “the rule unit compiler…builds the rules….that exposes read/write to data sources…via API (e.g., message bus or REST API)…” (paragraph 48). Thus, under the BRI, a message bus is any form of API. Therefore, both the REST API and the data network protocol, whatever it may be, can be understood as APIs that read upon “a message bus”.).
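For illustration only, exposing read and write functionality of a data source through a message-bus-style API, consistent with the BRI construction above that a message bus is any form of API, might be sketched as follows. This is a hypothetical sketch, not code from the application or any cited reference; all names are illustrative.

```python
class MessageBus:
    """Toy bus: read/write access to a data source only via messages."""
    def __init__(self, data_source):
        self._data = data_source  # dict standing in for the data source

    def handle(self, message):
        # Read and write functionality of the data source are exposed
        # exclusively through messages placed on the bus.
        if message["op"] == "read":
            return self._data.get(message["key"])
        if message["op"] == "write":
            self._data[message["key"]] = message["value"]
            return "ok"
        raise ValueError("unknown operation")

bus = MessageBus({"drug_id": "DIN-123"})
print(bus.handle({"op": "read", "key": "drug_id"}))                 # DIN-123
print(bus.handle({"op": "write", "key": "carrier", "value": "X"}))  # ok
print(bus.handle({"op": "read", "key": "carrier"}))                 # X
```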
As for claims 22-23, they contain similar limitations as claim 21 above. Thus, they are rejected under the same rationales.
Claim(s) 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Simpson, Sreedhar, Orti, and Caldato, in view of Wang et al. (US PGPUB 2022/0337476).
As for claim 7, Simpson teaches storing the fact [request/data subset] in a local working memory associated with the containerized microservice (paragraph 143. Examiner notes the prior art teaches the limitation for two reasons. First, all data/requests are communicated to the rule microservices via the REST API; thus, any operations on the received data occur on the receiving end of the API, inside the operating memory space of the rule microservice, which is understood as a local working memory. Second, Examiner notes that the data are stored in a Data Repository (see Fig. 2) to which the rule microservice applies the rules. The Data Repository is also “a local working memory associated with the containerized microservice”);
Sreedhar teaches updating the fact in the local working memory in view of executing the containerized microservice (paragraphs 44-45);
propagating the updated fact to the additional microservice via the shared channel (paragraph 45).
Orti teaches the microservices are implemented as containerized microservices (Pg. 25).
While Simpson, Sreedhar, Orti, and Caldato teach receiving data/requests to be processed by the rules microservice, they do not explicitly teach identifying a fact in a queue associated with the containerized microservice.
However, Wang teaches a known method of business rules processing including identifying a fact in a queue associated with the microservice (paragraph 112). This known technique is applicable to the system of Simpson, Sreedhar, Orti, and Caldato as they all share characteristics and capabilities; namely, they are directed to business rule engine based data processing.
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that applying the known technique of Wang would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Wang to the teachings of Simpson, Sreedhar, Orti, and Caldato would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such business rule engine processing features into similar systems. Further, applying identifying a fact in a queue associated with the microservice to Simpson, Sreedhar, Orti, and Caldato, which process a fact in the containerized microservice, accordingly, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow improved structured communication between rules engine results of different entities (Wang, paragraph 111).
As for claims 14 and 20, they contain similar claim limitations as claim 7 above. Thus, they are rejected under the same rationales.
In addition, Simpson also teaches updating the fact in the local working memory in view of the execution of the microservice (paragraph 143).
Orti also teaches the microservice is executed in a container (Pgs. 7 and 25).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN X LU whose telephone number is (571)270-1233. The examiner can normally be reached M-F 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197
(toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEVIN X LU/Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199