Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,623

CLUSTER, CLUSTER MANAGEMENT METHOD, AND CLUSTER MANAGEMENT PROGRAM

Status: Non-Final OA (§102, §103)
Filed: Sep 07, 2023
Examiner: DAO, TUAN C.
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hitachi, Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (642 granted / 782 resolved; +27.1% vs TC avg) — above average
Interview Lift: strong, +15.6% among resolved cases with an interview
Avg Prosecution: 3y 1m
Currently Pending: 38
Total Applications: 820 across all art units
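The headline figures above are simple arithmetic over the examiner's resolved-case history. A minimal Python sketch, with the page's reported numbers hard-coded (nothing here queries USPTO data), recomputes the career allow rate and the with-interview estimate:

```python
# Sanity check on the examiner statistics shown above. All inputs are
# the figures reported on this page; nothing here queries USPTO data.

granted = 642
resolved = 782

career_allow_rate = granted / resolved            # fraction of resolved cases allowed
print(f"Career allow rate: {career_allow_rate:.1%}")        # ~82.1%, shown as 82%

interview_lift = 0.156                            # the reported +15.6% lift
with_interview = min(career_allow_rate + interview_lift, 1.0)
print(f"Allow rate with interview: {with_interview:.0%}")   # rounds to 98%
```

642/782 ≈ 82.1%, displayed as 82%; adding the 15.6% lift gives ≈ 97.7%, displayed as 98%.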

Statute-Specific Performance

§101: 18.3% (-21.7% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 782 resolved cases.
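The "vs TC avg" deltas above appear to share a single baseline. Assuming each delta is the examiner's rate minus the Tech Center average (an assumption about how this page computes them), the implied baseline can be recovered with a short sketch:

```python
# Recover the Tech Center baseline implied by the statute-specific
# figures above. Assumption: delta = examiner rate - TC average.

stats = {
    "§101": (18.3, -21.7),
    "§103": (51.8, +11.8),
    "§102": (18.6, -21.4),
    "§112": (5.3, -34.7),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta   # baseline each delta was measured against
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")
```

Every statute yields the same implied baseline of 40.0%, which suggests all four deltas were measured against one Tech Center average estimate.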

Office Action

§102, §103
DETAILED ACTION

The instant application, Application No. 18/462,623, filed on 09/07/2023, is presented for examination. Claims 1-12 are pending; claims 1, 11, and 12 are independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or discussed by the examiner.

Priority

As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant's claim for priority based on an application filed on 02/01/2023.

Drawings

The applicant's drawings are acceptable for examination purposes.

Information Disclosure Statement

As required by M.P.E.P. 609, the applicant's Information Disclosure Statement dated 09/07/2023 is acknowledged, and the cited references have been considered in the examination of the pending claims.

Allowable Subject Matter

Claims 3-5 and 7-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art of record does not disclose or fairly suggest at least the limitations recited in dependent claims 3-5 and 7-10.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2 and 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2012/0254443 to Ueda.

As per claim 1, Ueda discloses a cluster management method in a cluster including a plurality of nodes (FIGs. 1-3; abstract; paragraphs 0023-0025 and 0042: "a web server group 120 that is assigned traffic by the load balancer 110 and processes requests sent from the end users' client terminals 180 over the Internet 102; and a Sorry server 124 that responds to requests on behalf of the web server group 120 when the web server group 120 is overloaded." → server group 120 (one cluster as claimed) including web servers 122a-z, and alternate server group 124 (substitute cluster as claimed) implemented as part of the functions provided by any of the web servers 122 (paragraph 0025)), each including a storage unit that stores a program to be executed in response to a request from a client terminal connected to a network (same citations and mapping as above), and a processor that executes the program, the cluster management method causing the processor to: in a case where at least one substitute program stored in a substitute cluster is executed as a substitute for at least one target program stored in the cluster (FIGs. 1-3; paragraphs 0023-0024, 0041-0042 and 0045: "The load balancer 110, according to the settings enforced by the load distribution setting unit 154, assigns requests issued via the Internet 102 among the instances 122 in the web server group 120 and monitors satisfaction of the transfer condition. If an overload state of the web system 104 is detected, the load balancer 110 transfers requests to the Sorry server 124. The Sorry server 124 is a web server that, when the web server group 120 is overloaded, responds to the transferred requests on behalf of the web server group 120 by returning a busy message to users. The Sorry server 124 is also a server that can be regarded as having a substantially infinite processing capability with respect to the processing of responding on behalf of the target server." → alternate server group 124 (substitute cluster as claimed) implemented as part of the functions provided by any of the web servers 122 (paragraph 0025)); when a request to the target program transmitted from the client terminal is acquired (same citations as above → the user request is sent to one of the web servers 122 while they are overloaded); execute a request transfer process of transferring the request to the target program to the substitute cluster such that the substitute program is executed in response to the request to the target program (same citations as above → alternate server group 124 (substitute cluster as claimed) processes the user requests transferred from the load balancer while the web servers 122 are overloaded).

As per claim 2, Ueda discloses wherein the cluster has at least one worker pod (FIGs. 1-3; paragraphs 0023-0025 and 0042, quoted above → server group 120 (one cluster as claimed) including web servers 122a-z (worker pods as claimed), and server group 124 (substitute cluster as claimed) implemented as part of the functions provided by any of the web servers 122 (paragraph 0025)), and in the request transfer process, requests to programs of all the worker pods are transferred to the substitute cluster as requests to the target program (FIGs. 1-3; paragraphs 0023-0024, 0041-0042 and 0045, quoted above → server group 124 (substitute cluster as claimed) processes the user requests transferred from the load balancer while the web servers 122 are overloaded).

As per claim 11, it is a cluster claim that recites the same limitations as claim 1. Accordingly, claim 11 is rejected for the same reasons as set forth in the rejection of claim 1.

As per claim 12, it is a cluster management program claim that recites the same limitations as claim 1. Accordingly, claim 12 is rejected for the same reasons as set forth in the rejection of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda in further view of US 2005/0021751 to Block et al. (hereafter "Block").
As per claim 6, Ueda does not explicitly disclose wherein the request transfer process is executed by a transfer pod provided in the cluster. Block discloses wherein the request transfer process is executed by a transfer pod provided in the cluster (FIGs. 1-2; paragraph 0030: "A cluster data port consistent with the invention principally supports the ability to selectively and dynamically choose among a plurality of connection paths 22 between source node 12 and any of nodes 14, 16, as well as the ability to selectively and dynamically switch over data flow from primary target node 14 to a backup primary node 16, effectively substituting the backup target node 16 as the new primary target node." → cluster data ports transfer the request to substitute nodes in the cluster). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Block into Ueda's teaching because it would provide the ability to selectively and dynamically switch over data flow from primary target node 14 to a backup primary node 16, effectively substituting the backup target node 16 as the new primary target node (Block, paragraph 0030).

Conclusion

The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c).

Prior art:

US 2023/0075894 to Walters: For example, a computer network receives a computer domain request. System 100 may input the computer domain request into a first machine learning algorithm. The first machine learning algorithm may output a cluster subset comprising: "Computer Domain 1," "Computer Domain 2," and "Computer Domain 3." The cluster subset generated by the first machine learning model (e.g., the first cluster subset) may be inputted into a second machine learning model.
The second machine learning model may output nodes for each cluster of the first cluster subset. In some embodiments, the second machine learning model may generate a second cluster subset. For example, the second machine learning model may generate the following nodes for Computer Domain 1: "physical network 1," "file server 1," and "database server 1." The system may determine substitute nodes for the outputs of the second machine learning model. For example, the system may determine that physical network 1 has the following substitutes: "virtual network 1," "virtual machine 1," and "hypervisor 1." The system may generate a plurality of new clusters where the substitute nodes have replaced some of the original nodes. For example, in one cluster the system may replace physical network 1, an original node outputted by the second machine learning model, with virtual network 1.

US 2016/0292037 to Kandukuri: For example, compute cluster 160 may include a failover mechanism that quickly detects a compute node failure and substitutes another compute node to replace the failed compute node. As used herein, the term "restored compute node" refers to either a replacement compute node or the original compute node after repair. In one embodiment, one or more steps for data recovery on a restored compute node depend on the completion of node recovery, as shall be described in greater detail hereafter.

US 2016/0191310 to Brandwine: A manager module associated with a source computing node may select a particular alternative intermediate destination computing node from a defined pool to use for one or more particular communications from the source computing node to an indicated final destination, such as based on a configured logical network topology for the managed computer network and/or on one or more other selection criteria (e.g., to enable load balancing between the alternative computing nodes). The manager module then forwards those communications to the selected intermediate destination computing node for further handling.

US 2013/0067188 to Mehra: Additionally, upon detecting a substitution event (e.g., a failure, a disconnection, or a load-balancing determination) involving a node 208 of the cluster 206 selected as a cluster resource owner and/or cluster resource writer, the storage device drivers 306 may initiate a selection of a substitute cluster resource owner and/or a substitute cluster resource writer of the cluster resource. This mechanism may be particularly utilized to regulate access to the storage pool configuration 602 of a storage pool 116 through the selection of a cluster resource writer (e.g., storage pool manager 606) having exclusive write access to the cluster resource (e.g., the storage pool configuration 602), while granting read access to the other nodes 208 of the cluster 206.

US 2008/0031601 to Hashimoto: (i) mounting thereon an external recording medium in which a plurality of data groups and an application program that refers to each of the data groups are stored, and (ii) playing back each of the data groups by executing the application program. The playback apparatus comprises: a control unit operable to control the execution of the application program; an obtaining unit operable to obtain, from an external server including therein one or more alternative data groups, an alternative data group to replace part of the data groups; and a storage unit that stores therein correspondence information showing a correspondence between a storage location where on an internal recording medium the alternative data group is stored and a storage location where on the external recording medium the part of the data groups to be replaced is stored.
US 2001/0014097 to Beck: More particularly, the cluster is provided with a skinny stack application for selecting a processor node, to which a connection will be established, after consideration has been given to the TCP port numbers that the processor node is listening for. Further, the cluster is provided with a method for tunneling data packets between processor nodes of the cluster such that the data packets do not have to be re-transmitted across a network.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan Dao, whose telephone number is (571) 270-3387. The examiner can normally be reached Monday to Friday from 9 am to 5 pm, and on alternate Fridays. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/TUAN C DAO/
Primary Examiner, Art Unit 2198
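For readers skimming the rejection, the request-transfer mechanism that both the claims and the cited Ueda passages describe (forward a client request to a substitute cluster whenever the target cluster cannot serve it) can be sketched roughly as below. This is an illustrative Python reconstruction only; all names, the overload test, and the capacities are hypothetical and come from neither the application nor the reference.

```python
# Illustrative sketch of a request-transfer process: requests aimed at
# a target cluster are redirected to a substitute cluster while a
# transfer condition (here, overload) holds. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Cluster:
    """A group of nodes running one program; 'active' counts requests taken."""
    name: str
    capacity: int
    active: int = 0

    def overloaded(self) -> bool:
        # Transfer condition: the cluster has no spare capacity left.
        return self.active >= self.capacity

    def handle(self, request: str) -> str:
        self.active += 1
        return f"{self.name} served {request}"

@dataclass
class TransferPod:
    """Routes each request to the target cluster, or to the substitute
    cluster while the transfer condition holds."""
    target: Cluster
    substitute: Cluster

    def route(self, request: str) -> str:
        if self.target.overloaded():
            # Execute the substitute program in place of the target program.
            return self.substitute.handle(request)
        return self.target.handle(request)

# Hypothetical setup: a small target cluster and a large substitute.
pod = TransferPod(target=Cluster("web-group", capacity=2),
                  substitute=Cluster("sorry-server", capacity=1_000_000))
for req in ["r1", "r2", "r3"]:
    print(pod.route(req))
# r1 and r2 are served by web-group; r3 is transferred to sorry-server
```

Here the substitute simply serves requests once the target's capacity is exhausted, mirroring at a very high level the load balancer's transfer condition in the quoted Ueda passages.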

Prosecution Timeline

Sep 07, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602257: ELECTRONIC DEVICE AND OPERATING METHOD WITH MODEL CO-LOCATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12566648: METHOD OF PROCESSING AGREEMENT TASK (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566627: PREDICTING THE NEXT BEST COMPRESSOR IN A STREAM DATA PLATFORM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561173: METHOD FOR DATA PROCESSING AND APPARATUS, AND ELECTRONIC DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561591: CLASSIFICATION AND TRANSFORMATION OF SEQUENTIAL EVENT DATA (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 98% (+15.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 782 resolved cases by this examiner. Grant probability derived from career allow rate.
