Prosecution Insights
Last updated: April 19, 2026
Application No. 17/642,711

PROVIDING OPTIMIZATION IN A MICRO SERVICES ARCHITECTURE

Status: Final Rejection (§103)
Filed: Mar 14, 2022
Examiner: AMIN, MUSTAFA A
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: A P Møller - Mærsk A/S
OA Round: 4 (Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 7m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases; 281 granted / 443 resolved; +8.4% vs TC avg)
Interview Lift: +29.4% (strong; based on resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline; 30 currently pending)
Total Applications: 473 (career history, across all art units)

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 46.1% (+6.1% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 443 resolved cases.

Office Action

§103
Detailed Action

This action is in response to the amendments filed on 12/17/2025. This action is in response to the application filed on 03/14/2022, which is a 371 of PCT/EP2020/076292 filed on 09/21/2020, which claims foreign priority to Denmark application no. PA201970581 filed on 09/20/2019. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-12 are pending. Claims 1-12 are rejected.

Applicant's Response

In Applicant's Response dated 12/17/2025, Applicant amended claims 1, 5, and 12 and argued against various rejections previously set forth in the Office Action mailed on 09/23/2025. In light of Applicant's amendments and remarks, all rejections of claims under 35 U.S.C. 101 set forth previously are withdrawn.

Examiner Notes

The Examiner cites particular columns, paragraphs, figures, and line numbers in the references as applied to the claims below for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Hayes et al. (US 20180336493 A1, hereinafter D1) in view of Sato (US 20160094228 A1, hereinafter D2), Parikh et al. (US 20190384632 A1, hereinafter D3), McIntyre et al. (US 10318644 B1, hereinafter D4), Narayanan et al. (US 20070297458 A1, hereinafter D5), and Olivier et al. (US 20150146580 A1, hereinafter D6).

As per claim 1, D1 discloses a computer-implemented method for providing optimization in a micro services architecture comprising at least one optimization service (D1, abstract, figure 1, 0013-0016: optimization platform), wherein the at least one optimization service comprises a managing component configured to provide access to the at least one optimization service to a client (D1, abstract, figure 1, 0013-0019, 0022: optimization platform accessible via REST API by clients in a client-server architecture);

a messaging component configured to queue optimization requests (D1, abstract, figure 1, 0013-0019: optimization platform accessible via REST API by clients in a client-server architecture, having work queue 135 to queue requests);
at least one working component configured to solve optimization tasks, the at least one working component comprising a first working component and a second working component (D1, abstract, figure 1, 0013-0019, 0023-0024: optimization platform having a plurality of worker machines to process/solve optimization requests by utilizing one or more optimization models);

and at least one storage component separate from the messaging component (D1, abstract, figure 1, 0013-0019: optimization platform having platform database 130, which is separate from queue 135, which stores work requests);

and wherein the components in the at least one optimization service are operatively connected to each other, the method comprising the following steps (D1, abstract, figure 1, 0013-0019: optimization platform having various components interconnected with each other to process optimization requests):

receiving, by the managing component, an optimization request submitted from the client comprising an optimization task and corresponding data for optimization (D1, abstract, figures 1-2, 0013-0019, 0040-0045: optimization platform receiving, by the managing component via API, an optimization request submitted from the client comprising an optimization task and corresponding data for optimization (e.g., conditions, values, metadata, hyperparameters, etc.));

storing, by the managing component, the corresponding data for optimization (D1, abstract, figures 1-2, 0013-0019, 0040-0046: optimization platform receiving an optimization task and corresponding data for optimization, segmenting the data, and thereafter storing it);

and a created associated identifier of the optimization task in the [work queue] (D1, abstract, figures 1-2, 0013-0020, 0040-0046: segmenting and storing the data, where the segmented data is associated with created metadata (construed as the "created identifier"));

sending, by the managing component, the optimization task and the associated identifier of the optimization task to the messaging component (D1, abstract, figures 1-2, 0013-0020, 0040-0046: work queue 135 receiving, via API, the optimization task and corresponding data, where the segmented data is associated with created metadata; D1 further discloses distributing the segmented data/metadata to various worker machines to be processed);

obtaining, by the at least one working component, the stored corresponding data for optimization from the [work queue] (D1, abstract, figures 1-2, 0013-0019, 0040-0046: D1 discloses distributing the data from storage to various worker machines to be processed);

creating, by the at least one working component, an optimization model to solve the optimization task, and solving, by the at least one working component, the optimization task based on the created optimization model (D1, abstract, figures 1-2, 0013-0020, 0040-0046, 0053, 0068-0070: D1 discloses dynamically selecting and/or tuning an ensemble of optimization models to process the optimization request and generating solutions/suggestions);

and storing, by the at least one working component, the solution to the optimization task and the associated identifier of the optimization task in the at least one storage component (D1, abstract, figures 1-2, 0013-0020, 0040-0046, 0053, 0068-0070: D1 discloses thereafter storing all computed/generated values/solutions to the platform database).

As noted above, D1 discloses a created associated identifier of the optimization task and the sending of the associated identifier, and arguably/inherently discloses [obtaining data] through the associated identifier of the optimization task [in order for worker components to retrieve and process tasks]; nevertheless, for the sake of completeness, D2 (fig. 2a, 0036) explicitly discloses [obtaining/referring to job data] through the associated identifier of [the job]. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D2.
This would have been obvious with predictable results of assigning unique IDs to jobs/tasks and obtaining or referring to job data via the unique job ID, as disclosed by D2.

D1 discloses worker machines pulling optimization tasks from the queue; however, D1 fails to expressly disclose monitoring, by the at least one working component, the messaging component for received… tasks, and at detection, by the at least one working component, of a received… task. D3 (0023, 0040) discloses monitoring, by the at least one working component, the messaging component for received… tasks, and at detection, by the at least one working component, of a received… task. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D3 as noted above. This would have been obvious with predictable results of detecting tasks and performing the tasks accordingly, as disclosed by D3.

As noted above, D1 discloses a storage component (e.g., database) separate from the messaging component (work queue), where request data is stored in/retrieved from the messaging component (work queue); however, D1 fails to expressly disclose [the design choice of] storing request data in the at least one storage component separate [from the messaging component/work queue]. The above concepts/design choices are well known and are, for instance, disclosed by D4 (col. 5, lines 15-26), which discloses the design choice of storing/retrieving request data in at least one storage component (e.g., a local database) separate from the messaging component/work queue (e.g., a server). Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D4 as noted above. This would have been obvious with predictable results of keeping records of requests in a first database and forwarding the request to servers/queue for further processing/storage, as disclosed by D4.

D1 discloses the optimization request as well as an identifier associated with the optimization task; however, D1 fails to expressly disclose acknowledging receipt of the… request to the client and sending the associated identifier of the… task/request to the client. However, D5 (0045-0050) discloses known/fundamental send/ack communication protocols in a client-server architecture, including acknowledging receipt of the… request to the client and sending the associated identifier of the… task/request to the client. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D5 as noted above. This would have been obvious with predictable results of processing/acknowledging client requests as known in the art and disclosed by D5.

D1 fails to expressly disclose the first and second working components being identical instances of a stateless working component. However, D6 (fig. 6b, 0094-0095) discloses that a plurality of identical VMs are stateless, which reads on the first and second working components (e.g., VMs) being identical instances of a stateless working component. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D6 as noted above. This would have been obvious for the purpose of redundancy: if one VM in a component (or the application code thereof) fails, other VMs in that component can seamlessly take over whilst the failure is corrected, as disclosed by D6.
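For orientation, the claim 1 flow that the action maps onto D1 (client request to a managing component, task plus identifier into a message queue, identical stateless workers solving and writing results to a separate storage component) can be sketched as a minimal Python model. All names below are illustrative stand-ins, not taken from the claims or from D1:

```python
import queue
import uuid

# Illustrative stand-ins for the claimed components (hypothetical names).
storage = {}                 # storage component, separate from the queue
work_queue = queue.Queue()   # messaging component

def submit_request(payload):
    """Managing component: store the payload, enqueue the task,
    and acknowledge the client with the created identifier."""
    task_id = str(uuid.uuid4())             # created associated identifier
    storage[task_id] = {"payload": payload}
    work_queue.put(task_id)                 # send task + identifier to the queue
    return task_id                          # acknowledgement returned to client

def worker():
    """Stateless working component: detect a queued task, fetch its data
    by identifier, solve, and store the solution under the same identifier."""
    task_id = work_queue.get()
    payload = storage[task_id]["payload"]
    solution = min(payload)                 # trivial stand-in "optimization model"
    storage[task_id]["solution"] = solution
    work_queue.task_done()

task_id = submit_request([7, 3, 9])
worker()                                    # any identical worker instance could run this
print(storage[task_id]["solution"])         # -> 3
```

Because the workers keep no state of their own, any identical instance can service the queue, which is the redundancy rationale the action draws from D6.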
As per claim 2, with the rejection of claim 1 incorporated, D1 discloses the method further comprising the following steps: determining, by the managing component, meta data based at least on the submitted optimization request, wherein the meta data comprises submission data of the optimization request; and storing, by the managing component, the determined meta data in the [work queue], wherein the [work queue] is accessible to both the managing component and the at least one working component (D1, abstract, figures 1-2, 0013-0020, 0040-0046: optimization platform receiving an optimization task and corresponding data for optimization (e.g., conditions, values, metadata, hyperparameters, etc.), segmenting the data and thereafter storing it, where the segmented data is associated with created metadata; D1 further discloses distributing the data/meta data from storage to various worker machines to be processed).

As noted above, D1 discloses a storage component (e.g., database) separate from the messaging component (work queue), where request data is stored in/retrieved from the messaging component (work queue); however, D1 fails to expressly disclose [the design choice of] storing request data in the at least one storage component separate [from the messaging component/work queue]. The above concepts/design choices are well known and are, for instance, disclosed by D4 (col. 5, lines 15-26), which discloses the design choice of storing/retrieving request data in at least one storage component (e.g., a local database) separate from the messaging component/work queue (e.g., a server). Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D4 as noted above. This would have been obvious with predictable results of keeping records of requests in a first database and forwarding the request to servers/queue for further processing/storage, as disclosed by D4.
As per claim 3, with the rejection of claim 1 incorporated, D1 discloses the method further comprising the following steps: determining, by the managing component, a payload based on the submitted optimization request, wherein the payload comprises the corresponding data for optimization; and storing, by the managing component, the determined payload in the [work queue], wherein the [work queue] is accessible to both the managing component and the at least one working component (D1, abstract, figures 1-2, 0013-0020, 0040-0047: optimization platform receiving an optimization task and corresponding data for optimization (e.g., conditions, values, metadata, hyperparameters, etc.), segmenting the data and thereafter storing it, where the segmented data is associated with created metadata; D1 further discloses distributing the data/meta data from storage to various worker machines to be processed).

As noted above, D1 discloses a storage component (e.g., database) separate from the messaging component (work queue), where request data is stored in/retrieved from the messaging component (work queue); however, D1 fails to expressly disclose [the design choice of] storing request data in the at least one storage component separate [from the messaging component/work queue]. The above concepts/design choices are well known and are, for instance, disclosed by D4 (col. 5, lines 15-26), which discloses the design choice of storing/retrieving request data in at least one storage component (e.g., a local database) separate from the messaging component/work queue (e.g., a server). Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D4 as noted above. This would have been obvious with predictable results of keeping records of requests in a first database and forwarding the request to servers/queue for further processing/storage, as disclosed by D4.
As per claim 4, with the rejection of claim 1 incorporated, D1 discloses wherein the [work queue] comprises a database, wherein the database is configured to store meta data, or meta data and payload, of the optimization request (D1, abstract, figures 1-2, 0013-0020, 0040-0047: optimization platform receiving an optimization task and corresponding data for optimization (e.g., conditions, values, metadata, hyperparameters, etc.), segmenting the data and thereafter storing it in the platform database (work queue), where the segmented data is associated with created metadata; D1 further discloses distributing the data/meta data from the work queue to various worker machines to be processed).

As noted above, D1 discloses a storage component (e.g., database) separate from the messaging component (work queue), where request data is stored in/retrieved from the messaging component (work queue); however, D1 fails to expressly disclose [the design choice of] storing request data in the at least one storage component separate [from the messaging component/work queue]. The above concepts/design choices are well known and are, for instance, disclosed by D4 (col. 5, lines 15-26), which discloses the design choice of storing/retrieving request data in at least one storage component (e.g., a local database) separate from the messaging component/work queue (e.g., a server). Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D4 as noted above. This would have been obvious with predictable results of keeping records of requests in a first database and forwarding the request to servers/queue for further processing/storage, as disclosed by D4.
As per claim 5, with the rejection of claim 4 incorporated, D1 discloses wherein the [work queue] comprises the database and an object storage, respectively, and wherein the object storage is configured to store the payload of the optimization request (D1, abstract, figures 1-2, 0013-0020, 0034, 0040-0047: optimization platform having a database/work queue able to store various data, including the data/payload of the request for optimization). D1 discloses a database; however, it fails to expressly disclose comprising two storage components including an object storage. D3 (0033) discloses two storage components/databases for storing content/data and/or tasks. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention, as disclosed in D1, to include the teachings of D3 as noted above. This would have been obvious with predictable results of detecting tasks and performing the tasks accordingly, as disclosed by D3.

As per claim 6, with the rejection of claim 2 incorporated, D1 discloses wherein the meta data comprises data for annotating the optimization request (D1, abstract, figures 1-2, 0013-0020, 0040-0047: optimization platform receiving an optimization task and corresponding data for optimization, segmenting the data and thereafter storing it in the platform database/storage, where the segmented data is associated/annotated with created metadata; D1 further discloses distributing the data/meta data from storage to various worker machines to be processed).
As per claim 7, with the rejection of claim 3 incorporated, D1 discloses wherein the payload comprises data for optimization corresponding to the optimization request (D1, abstract, figures 1-2, 0013-0020, 0040-0047: optimization platform receiving an optimization task and corresponding data for optimization, segmenting the data and thereafter storing it in the platform database/storage, where the segmented data is associated/annotated with created metadata; D1 further discloses distributing the data/meta data from storage to various worker machines to be processed).

As per claim 8, with the rejection of claim 1 incorporated, D1 discloses wherein the architecture comprises a decision support system and/or an optimization system, each system utilizing a cloud infrastructure (D1, abstract, figures 1-2, 0013-0020, 0040-0047: optimization platform in a client/server (cloud) architecture).

As per claim 9, with the rejection of claim 1 incorporated, D1 discloses wherein the at least one working component is a stateless component (D1, abstract, figures 1-2, 0013-0020, 0040-0047: the optimization platform's client/server (cloud) architecture relies on the stateless REST protocol. Additionally, D6 (fig. 6b, 0094-0095) discloses that a plurality of identical VMs are stateless, which reads on the first and second working components (e.g., VMs) being identical instances of a stateless working component, and see the limitations above).

As per claim 10, with the rejection of claim 1 incorporated, D1 discloses wherein the client is a piece of computer hardware or software configured to access the optimization service (D1, abstract, figures 1-2, 0013-0020, 0040-0047: the optimization platform's client/server (cloud) architecture relies on the stateless REST protocol and allows clients (e.g., hardware/software) to access the optimization platform).
As per claims 11-12: Claims 11-12 are medium and system claims corresponding to method claim 1 and are of substantially the same scope. Accordingly, claims 11-12 are rejected under the same rationale as set forth for claim 1.

Response to Arguments

Applicant's arguments filed on 12/17/2025 have been fully considered, but they are not persuasive and/or are moot in view of the new/modified grounds of rejection.

Conclusion

Applicant's amendment necessitated any new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: HYPERVISOR ASSISTED APPLICATION VIRTUALIZATION (US 20210019164 A1, published 2021-01-21). Abstract: A virtualized application runs on top of a guest operating system (OS) of a virtual machine and is supported by a file system of the guest OS.
The method of supporting the virtualized application with the file system includes provisioning a first virtual disk as a data store of the file system and a second virtual disk for the virtualized application, wherein the first and second virtual disks store first and second files of the virtualized application, respectively; retrieving metadata of the virtualized application; updating a master file table of the file system according to the retrieved metadata to map the first files to logical blocks of the file system; updating the master file table to map the second files to additional logical blocks according to the retrieved metadata; and creating a mapping for the additional logical blocks, used during an input/output operation, according to the retrieved metadata. See form 892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUSTAFA A AMIN, whose telephone number is (571) 270-3181. The examiner can normally be reached Monday-Friday from 8:00 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kevin Young, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/MUSTAFA A AMIN/
Primary Examiner, Art Unit 2194

Prosecution Timeline

Mar 14, 2022: Application Filed
Jul 28, 2024: Non-Final Rejection (§103)
Oct 22, 2024: Response Filed
Nov 07, 2024: Final Rejection (§103)
Jan 06, 2025: Response after Non-Final Action
Feb 05, 2025: Request for Continued Examination
Feb 09, 2025: Response after Non-Final Action
Sep 18, 2025: Non-Final Rejection (§103)
Dec 17, 2025: Response Filed
Jan 27, 2026: Final Rejection (§103)
Mar 23, 2026: Interview Requested
Mar 31, 2026: Applicant Interview (Telephonic)
Mar 31, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561517: AUTOMATIC FILLING OF A FORM WITH FORMATTED TEXT (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554765: AUDIO PLAYING METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12536368: SYSTEMS AND METHODS FOR PERSISTENT INHERITANCE OF ARBITRARY DOCUMENT CONTENT (granted Jan 27, 2026; 2y 5m to grant)
Patent 12524260: MEASUREMENTS OF VIRTUAL MACHINES (granted Jan 13, 2026; 2y 5m to grant)
Patent 12511166: FLOW MANAGEMENT WITH SERVICES (granted Dec 30, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 63%
With Interview: 93% (+29.4%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 443 resolved cases by this examiner. Grant probability derived from career allow rate.
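The headline projection figures can be reproduced from the examiner statistics above. This is a hedged reconstruction: it assumes the interview lift is additive in percentage points on top of the career allow rate, which is consistent with the displayed roundings but is not stated on the page:

```python
# Reconstructing the dashboard figures (assumption: the +29.4% interview
# lift is additive in percentage points on top of the career allow rate).
granted, resolved = 281, 443
allow_rate = 100 * granted / resolved      # career allow rate, in percent
interview_lift = 29.4                      # percentage points
with_interview = allow_rate + interview_lift

print(round(allow_rate))       # -> 63
print(round(with_interview))   # -> 93
```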
