Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,027

MECHANISM TO MIGRATE THREADS ACROSS OPERATING SYSTEMS IN A DISTRIBUTED MEMORY SYSTEM

Non-Final OA (§102, §103)
Filed
Nov 14, 2023
Examiner
KIM, SISLEY NAHYUN
Art Unit
2196
Tech Center
2100 — Computer Architecture & Software
Assignee
Samsung Electronics Co., Ltd.
OA Round
1 (Non-Final)
89%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
99%
With Interview

Examiner Intelligence

Grants 89% — above average
89%
Career Allow Rate
590 granted / 665 resolved
+33.7% vs TC avg
Strong +17% interview lift
+16.9%
Interview Lift
allowance-rate gain in resolved cases with an interview vs. without
Typical timeline
2y 9m
Avg Prosecution
42 currently pending
Career history
707
Total Applications
across all art units

Statute-Specific Performance

§101
9.1%
-30.9% vs TC avg
§103
49.6%
+9.6% vs TC avg
§102
26.1%
-13.9% vs TC avg
§112
7.2%
-32.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 665 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4, 7-11, 13, 14, 18, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Woodward et al. (US 2019/0272172, hereinafter Woodward).

Regarding claim 1, Woodward discloses A system comprising (fig. 1-7): at least a first node of a first computer system and a second node of a second computer system, the first node comprising a first plurality of cores running a first operating system (paragraph [0021]: In an embodiment a method is provided for migrating an application from a source computing environment having a source Operating System (OS) to a target computing environment, the target computing environment having a target OS; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor), a first thread proxy (paragraph [0109]: the source runtime 662, the surrogate system process 630), and a first thread daemon (paragraph [0109]: source agent 605), the second node comprising a second plurality of cores running a second operating system (paragraph [0021]: In an embodiment a method is provided for migrating an application from a source computing environment having a source Operating System (OS) to a target computing environment, the target computing environment having a target OS; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor), a second thread proxy (paragraph [0088]: the virtual migration sandbox 260), and a second thread daemon (paragraph [0109]: migration agent 610 and/or administrative computing environment 275), wherein the first operating system of the first node is a different instance than the second operating system of the second node (paragraph [0041]: In some embodiments, the source computing environment and the target environment may share a same hardware processor, though the source computing environment and the target computing environment may comprise separate instances of different operating systems, or different versions of a same operating system … The source computing environment and the target computing environment may each comprise different instances of a same operating system (typically different versions of that operating system)), wherein the first thread daemon of the first node is configured to send a request to the second thread daemon of the second node in response to a request, from a process on the first node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610), to create a thread on a core of the second plurality of cores of the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor), wherein the second thread daemon on the second node (paragraph [0109]: migration agent 610 and/or administrative computing environment 275) is configured to create a proxy process (paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270) and instantiate the thread in the proxy process in response to the request (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260).

Regarding claim 2, Woodward discloses wherein the second thread daemon is further configured to create system information that points to the proxy process (paragraph [0061]: the migration registry 255 is also modified during the migration setup step to re-direct out-of-process calls from the native system process 230 to the migration system process 265; paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270, 272).

Regarding claim 4, Woodward discloses wherein the second thread daemon is further configured to send return code to the first thread daemon (paragraph [0073]: The source agent 405 and migration agent 410 may establish a compressed encrypted data pipe … and forward calls and responses; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610; Note: Under a broad but reasonable interpretation, a “return code” is simply the status or outcome of a migrated execution call. In Woodward, the migration agent 410 (i.e., the second thread daemon) forwards call results to the source agent 405 (i.e., the first thread daemon) over the established data pipe).
Regarding claims 7 and 18, Woodward discloses wherein the first thread daemon is configured to locally instantiate a thread in response to a request, from the process on the first node, to run the thread on a core of the first plurality of cores of the first node (paragraph [0045]: Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605).

Regarding claims 8 and 19, Woodward discloses wherein the second thread daemon is configured to locally instantiate a thread in response to a request, from a process on the second node, to run the thread on a core of the second plurality of cores of the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor).

Regarding claim 9, Woodward discloses wherein the first thread daemon is configured to gather information about the thread in response to the request from the process on the first node (paragraph [0045]: Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605).

Regarding claim 10, Woodward discloses A method of (fig. 1-7) migrating threads across a first node comprising a first plurality of cores running a first operating system and a second node comprising a second plurality of cores running a second operating system that is a different instance than the first operating system (paragraph [0021]: In an embodiment a method is provided for migrating an application from a source computing environment having a source Operating System (OS) to a target computing environment, the target computing environment having a target OS; paragraph [0041]: In some embodiments, the source computing environment and the target environment may share a same hardware processor, though the source computing environment and the target computing environment may comprise separate instances of different operating systems, or different versions of a same operating system … The source computing environment and the target computing environment may each comprise different instances of a same operating system (typically different versions of that operating system); paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor), the method comprising: receiving, by a first thread daemon of the first node, a request from a process on the first node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605) to migrate a thread of the process to a core of the second plurality of cores on the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor); sending, by the first thread daemon of the first node, the request to a second thread daemon of the second node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610); and instantiating, by a second thread proxy on the second node, the thread within a proxy process of the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260).

Regarding claim 11, Woodward discloses further comprising creating, by the second thread daemon of the second node, the proxy process on the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270).
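[Editorial aside: the flow recited in claims 1 and 10 above (a local thread daemon receives a create/migrate request, forwards it to the remote node's daemon, which creates a proxy process, instantiates the thread in it, and returns a code) can be sketched in miniature. The sketch below is purely illustrative: the class and method names are invented for this note, it simulates both "nodes" inside one Python process, and it reflects neither the applicant's nor Woodward's actual implementation.]

```python
import threading
import queue

class ThreadDaemon:
    """Per-node daemon: runs threads locally or forwards requests to a peer daemon."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.peer = None           # daemon on the other node (wired up below)
        self.proxy_threads = []    # stand-in for a proxy process holding migrated threads

    def request_thread(self, target_node, fn, *args):
        """Entry point a local process uses to request a thread on some node."""
        if target_node != self.node_id:
            # Remote case: forward the create-thread request to the peer daemon
            return self.peer.handle_remote_request(fn, *args)
        t = threading.Thread(target=fn, args=args)  # local instantiation
        t.start()
        t.join()
        return 0  # return code: success

    def handle_remote_request(self, fn, *args):
        """Peer side: create the 'proxy process' slot and instantiate the thread in it."""
        result = queue.Queue()
        t = threading.Thread(target=lambda: result.put(fn(*args)))
        self.proxy_threads.append(t)  # thread lives inside this node's proxy
        t.start()
        t.join()
        return 0  # return code sent back to the requesting daemon

# Two nodes, each with its own daemon, wired as peers
d1, d2 = ThreadDaemon(1), ThreadDaemon(2)
d1.peer, d2.peer = d2, d1

rc = d1.request_thread(2, lambda x: x * x, 7)  # "migrate" a thread to node 2
print(rc)  # 0
```

In a real distributed-memory system the forwarding step would cross an actual transport (e.g., the encrypted data pipe Woodward describes) rather than an in-process method call.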
Regarding claim 13, Woodward discloses further comprising gathering, by the first thread daemon of the first node, information about the thread prior to the sending of the request (paragraph [0045]: Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605).

Regarding claim 14, Woodward discloses further comprising sending, by the second thread proxy on the second node, return code to the first thread daemon on the second node (paragraph [0073]: The source agent 405 and migration agent 410 may establish a compressed encrypted data pipe … and forward calls and responses; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610; Note: Under a broad but reasonable interpretation, a “return code” is simply the status or outcome of a migrated execution call. In Woodward, the migration agent 410 (i.e., the second thread daemon) forwards call results to the source agent 405 (i.e., the first thread daemon) over the established data pipe).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Woodward et al. (US 2019/0272172, hereinafter Woodward) in view of Mahajan et al. (US 2020/0167177, hereinafter Mahajan).

Regarding claims 3 and 12, Woodward does not disclose wherein the system comprises a /proc/cpuinfo file containing a node identifier field indicating that the first plurality of cores is on the first node and the second plurality of cores is on the second node. Mahajan discloses wherein the system comprises a /proc/cpuinfo file containing a node identifier field indicating that the first plurality of cores is on the first node and the second plurality of cores is on the second node (paragraph [0042]: At S820, the automatically determined resource requirements of the virtual machine may include Central Processing Unit (“CPU”) core requirements including a frequency value and a count value. For example, the migration platform might perform this by using a text parsing tool such as grep to determine the content of the file /proc/cpuinfo and thus ascertain CPU core requirements (e.g., frequency and count). At S830, the automatically determined resource requirements of the virtual machine may include disk requirements. For example, the migration platform might perform this by ssh-ing into it and using basic Linux commands and files such as lsblk, blkid and/or df -hT to determine disk requirements). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Woodward’s migration system on Linux-based environments, to parse /proc/cpuinfo as taught by Mahajan to obtain CPU attributes needed for automated orchestration, including identifiers distinguishing cores across sockets/nodes, so that cores can be unambiguously associated with their respective nodes during cross-environment coordination. The motivation would have been to provide for the automatic migration of a virtual machine from one cloud computing provider to another in a fast, automatic, and accurate manner (Mahajan paragraph [0043]).

Claims 5, 6, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Woodward et al. (US 2019/0272172, hereinafter Woodward) in view of Banerjee et al. (US 2020/0366604, hereinafter Banerjee).

Regarding claim 5, Woodward discloses wherein the first thread daemon of the first node is configured to send a request to the second thread daemon of the second node in response to a request, from a process on the first node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605.
The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610), to create a thread on a core of the second plurality of cores of the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor), wherein the second thread daemon on the second node (paragraph [0109]: migration agent 610 and/or administrative computing environment 275) is configured to create a proxy process (paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270) and instantiate the thread in the proxy process in response to the request (paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260).

Woodward does not disclose wherein the second thread daemon of the second node is further configured to send a request to the first thread daemon of the first node in response to a request, from a process on the second node, to create a thread on a core of the first plurality of cores of the first node, and wherein the first thread daemon on the first node is configured to create a proxy process and instantiate the thread in the proxy process in response to the request. In other words, Woodward does not explicitly teach the reverse sequence, i.e., the target-side daemon initiating a create-thread request back to the source-side daemon and the source-side daemon then creating a proxy process and instantiating the thread.

Banerjee discloses migrating back to the original system (paragraph [0034]: the application migration may have been temporary, and the application migrated back to the original system after a short period of time; paragraph [0039]: if desired, once the original system is available, a kernel and application can be migrated back to the original system in a manner described herein; paragraph [0059]: Processing unit 206 may be a multi-core processor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Woodward’s agent/sandbox architecture to support reverse-direction thread migration as taught by Banerjee, i.e., after migrating execution contexts from source to target, enable the same agents to carry migration requests in the opposite direction. Banerjee’s explicit teaching of returning a migrated application (and kernel) back to its original environment provides the motivation to invert Woodward’s messaging protocol: a target-side daemon would send a create-thread request to the source-side daemon, and the source-side daemon would then create the proxy process and instantiate the thread. Such reversal of a known, symmetric communication pattern is a routine design choice yielding predictable results and involves no more than ordinary engineering.
Regarding claim 15, Woodward discloses receiving, by a first thread daemon of the first node, a request from a process on the first node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605) to migrate a thread of the process to a core of the second plurality of cores on the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260; paragraph [0113]: The electronic device 752 typically includes a processor 754, such as a Central Processing Unit (CPU), and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor); sending, by the first thread daemon of the first node, the request to a second thread daemon of the second node (paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610); and instantiating, by a second thread proxy on the second node, the thread within a proxy process of the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270; paragraph [0093]: an application 205 is instantiated within the virtual migration sandbox 260).

Woodward does not disclose further comprising: receiving, by the second thread daemon of the second node, a request from a process on the second node to migrate a thread of the process to a core of the first plurality of cores on the first node; sending, by the second thread daemon of the second node, the request to the first thread daemon of the first node; and instantiating, by a first thread proxy on the first node, the thread within a proxy process of the first node. In other words, Woodward does not explicitly teach the reverse sequence, i.e., the target-side daemon initiating a create-thread request back to the source-side daemon and the source-side daemon then creating a proxy process and instantiating the thread.

Banerjee discloses migrating back to the original system (paragraph [0034]: the application migration may have been temporary, and the application migrated back to the original system after a short period of time; paragraph [0039]: if desired, once the original system is available, a kernel and application can be migrated back to the original system in a manner described herein; paragraph [0059]: Processing unit 206 may be a multi-core processor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Woodward’s agent/sandbox architecture to support reverse-direction thread migration as taught by Banerjee, i.e., after migrating execution contexts from source to target, enable the same agents to carry migration requests in the opposite direction. Banerjee’s explicit teaching of returning a migrated application (and kernel) back to its original environment provides the motivation to invert Woodward’s messaging protocol: a target-side daemon would send a create-thread request to the source-side daemon, and the source-side daemon would then create the proxy process and instantiate the thread. Such reversal of a known, symmetric communication pattern is a routine design choice yielding predictable results and involves no more than ordinary engineering.

Regarding claims 6 and 17, Woodward discloses wherein the second thread daemon is further configured to send return code to the first thread daemon (paragraph [0073]: The source agent 405 and migration agent 410 may establish a compressed encrypted data pipe … and forward calls and responses; paragraph [0109]: During execution of the source application 105, the source runtime 662 and the surrogate system process 630 report captured in-process and out-of-process calls to the source agent 605. The source agent 605 transmits the captured in-process and out-of-process calls to the migration agent 610; Note: Under a broad but reasonable interpretation, a “return code” is simply the status or outcome of a migrated execution call. In Woodward, the migration agent 410 (i.e., the second thread daemon) forwards call results to the source agent 405 (i.e., the first thread daemon) over the established data pipe). Woodward does not disclose wherein the first thread daemon is configured to send return code to the second thread daemon. Banerjee discloses migrating back to the original system (paragraph [0034]: the application migration may have been temporary, and the application migrated back to the original system after a short period of time; paragraph [0039]: if desired, once the original system is available, a kernel and application can be migrated back to the original system in a manner described herein; paragraph [0059]: Processing unit 206 may be a multi-core processor).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Woodward’s agent/sandbox architecture to support reverse-direction thread migration as taught by Banerjee, i.e., after migrating execution contexts from source to target, enable the same agents to carry migration requests in the opposite direction. Banerjee’s explicit teaching of returning a migrated application (and kernel) back to its original environment provides the motivation to invert Woodward’s messaging protocol: a target-side daemon would send a create-thread request to the source-side daemon, and the source-side daemon would then create the proxy process and instantiate the thread. Such reversal of a known, symmetric communication pattern is a routine design choice yielding predictable results and involves no more than ordinary engineering.

Regarding claim 16, Woodward discloses further comprising creating, by the second thread daemon of the second node, the proxy process on the second node (paragraph [0045]: an application 105 may need access to a number of resources provided by the source OS 102 … Some of the objects are out-of-process components which will run in their own threads of execution; paragraph [0088]: The preparation step 308 also includes instantiation of the virtual migration sandbox 260 within the target OS 202 of the migration computing environment 200. The virtual migration sandbox 260 may be created by the administrative console 270). Woodward does not disclose further comprising creating, by the first thread daemon of the first node, the proxy process on the first node. Banerjee discloses migrating back to the original system (paragraph [0034]: the application migration may have been temporary, and the application migrated back to the original system after a short period of time; paragraph [0039]: if desired, once the original system is available, a kernel and application can be migrated back to the original system in a manner described herein; paragraph [0059]: Processing unit 206 may be a multi-core processor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Woodward’s agent/sandbox architecture to support reverse-direction thread migration as taught by Banerjee, i.e., after migrating execution contexts from source to target, enable the same agents to carry migration requests in the opposite direction. Banerjee’s explicit teaching of returning a migrated application (and kernel) back to its original environment provides the motivation to invert Woodward’s messaging protocol: a target-side daemon would send a create-thread request to the source-side daemon, and the source-side daemon would then create the proxy process and instantiate the thread. Such reversal of a known, symmetric communication pattern is a routine design choice yielding predictable results and involves no more than ordinary engineering.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Moussaoui (US 2022/0357995) discloses “the Linux command «lscpu», which retrieves CPU architecture information from sysfs and /proc/cpuinfo, may in some embodiments be used to retrieve via the resource allocation daemon a list of all the CPU nodes of the server on which the Pod is running” (paragraph [0137]). Moroo et al.
(US 2015/0261566) discloses “it may be desirable to execute some processes which are continuously run, such as daemon, and which have been migrated to particular OS core(s) 2, at the timing when the processing wait time for a process is reduced, for example” (paragraph [0086]) and “When there is any process whose maximum wait time is equal to or smaller than a predetermined threshold, and when the process migrating unit 15 determines that there is any processes that have been migrated to different OS core(s) 2, the migrating cancelling unit 17 sets to allow the processes that have been migrated by the process migrating unit 15, to be executed on other OS cores 2” (paragraph [0095]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM, whose telephone number is (571) 270-7832. The examiner can normally be reached M-F 11:30 AM - 7:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SISLEY N KIM/
Primary Examiner, Art Unit 2196
02/23/2026
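[Editorial aside: the grep-based reading of /proc/cpuinfo cited against claims 3 and 12 amounts to splitting the file into per-processor blocks and pulling out identifier fields. Below is a minimal, hypothetical sketch of that parsing. Note the hedge: stock Linux /proc/cpuinfo exposes a "physical id" (CPU package/socket) per logical processor rather than the claimed "node identifier field", so the sketch uses "physical id" as a stand-in, and the sample text and function name are invented for this note.]

```python
# Sample text mimicking /proc/cpuinfo formatting: blank lines separate
# per-processor blocks, and each line is "key : value".
SAMPLE = """\
processor\t: 0
physical id\t: 0
cpu MHz\t\t: 2400.000

processor\t: 1
physical id\t: 1
cpu MHz\t\t: 2400.000
"""

def cores_by_package(cpuinfo_text):
    """Map each 'physical id' (socket-like identifier) to its list of processors."""
    packages, current = {}, {}
    for line in cpuinfo_text.splitlines():
        if not line.strip():            # a blank line ends one processor block
            if current:
                packages.setdefault(current.get("physical id"), []).append(
                    current.get("processor"))
            current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:                         # flush the final block (no trailing blank line)
        packages.setdefault(current.get("physical id"), []).append(
            current.get("processor"))
    return packages

print(cores_by_package(SAMPLE))  # {'0': ['0'], '1': ['1']}
```

On a real system the same function could be applied to the contents of /proc/cpuinfo directly; NUMA node membership, if that is what the claimed "node identifier" denotes, is instead exposed under /sys/devices/system/node/.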

Prosecution Timeline

Nov 14, 2023
Application Filed
Feb 23, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602254
JOB NEGOTIATION FOR WORKFLOW AUTOMATION TASKS
2y 5m to grant Granted Apr 14, 2026
Patent 12602260
COMPUTER-BASED PROVISIONING OF CLOUD RESOURCES
2y 5m to grant Granted Apr 14, 2026
Patent 12591474
BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL
2y 5m to grant Granted Mar 31, 2026
Patent 12585507
LOAD TESTING AND PERFORMANCE BENCHMARKING FOR LARGE LANGUAGE MODELS USING A CLOUD COMPUTING PLATFORM
2y 5m to grant Granted Mar 24, 2026
Patent 12578994
SYSTEMS AND METHODS FOR TRANSITIONING COMPUTING DEVICES BETWEEN OPERATING STATES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+16.9%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 665 resolved cases by this examiner. Grant probability derived from career allow rate.
