Prosecution Insights
Last updated: April 18, 2026
Application No. 18/215,155

INFORMATION PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

Non-Final OA: §101, §103, §112
Filed: Jun 27, 2023
Examiner: LIN, HSING CHUN
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vivo Mobile Communication Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 59% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (64 granted / 108 resolved; +4.3% vs TC avg)
Interview Lift: +79.8% on resolved cases with interview
Typical Timeline: 3y 4m avg prosecution; 37 currently pending
Career History: 145 total applications across all art units
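The headline figures in this panel follow directly from the raw counts shown above; a quick sanity check of the arithmetic (the implied Tech Center average is a back-calculation from the stated +4.3% delta, not a figure from the source):

```python
# Reproduce the examiner panel's headline figures from the raw counts
# (64 granted of 108 resolved; +4.3% vs the Tech Center average).
granted, resolved = 64, 108

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~59.3%, displayed as 59%

# Back out the TC average implied by the +4.3% delta (an inference,
# not a value stated on the page).
implied_tc_avg = allow_rate - 0.043
print(f"Implied TC average: {implied_tc_avg:.1%}")
```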

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 34.0% (-6.0% vs TC avg)
Tech Center averages are estimates based on career data from 108 resolved cases.
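Each statute's delta lets the Tech Center baseline it was measured against be recovered; a small reconstruction (the recovered baselines are inferred from the stated deltas, not given in the source):

```python
# Recover the implied TC-average baseline for each statute from the
# examiner's rate and the stated delta versus the TC average.
rates = {
    "101": (17.1, -22.9),
    "103": (35.8, -4.2),
    "102": (6.5, -33.5),
    "112": (34.0, -6.0),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # examiner rate minus delta yields the baseline
    print(f"§{statute}: examiner {rate}% vs implied TC avg ~{tc_avg:.1f}%")
```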

Office Action

§101 §103 §112
DETAILED ACTION The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claims 1-20 are pending in this application. Response to Arguments Applicant’s arguments regarding the rejections of claims 1-20 under 35 U.S.C. 112b have been fully considered and some are persuasive. The rejections have been withdrawn. However, new 35 U.S.C. 112b rejections are applied to claims 1-20. Applicant's arguments regarding the 35 U.S.C. 101 rejections of claims 1-20 have been fully considered but they are not persuasive. Regarding the 35 U.S.C. 101 rejection, the applicant argues the following in the remarks: The claims do not recite a mental process. The claims provide an improvement so the abstract ideas are integrated into a practical application. Additional elements are not merely "data gathering" or "receiving or transmitting data" as characterized in the Office Action. Instead, they provide a specific way to use network tools (the ECS Option) to find and assign the best server for a task or a service. At the time of the invention, it was not a common or routine practice to use an ECS Option to carry computing requirements, nor was it conventional to pick a server by looking at a combination of its physical distance, routing distance, network delay, and current workload all at once. Examiner has thoroughly considered Applicant’s arguments, but respectfully finds them unpersuasive for at least the following reasons: As to point (a), the examiner respectfully disagrees. For example, determining a server query request is a mental process since humans can mentally create a request. Additionally, the other steps that Applicant recites are not mentally processes including the sending step, the obtaining step, and the instructing steps are actually insignificant extra solution activities. As to point (b), the examiner respectfully disagrees. 
The specification discloses an improvement, but the claims do not recite all the steps necessary to realize the improvement (see MPEP 2106.04(d)(1): if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement; that is, the claim must include the components or steps of the invention that provide the improvement described in the specification).

As to point (c), the examiner respectfully disagrees. Narrowing the server query request as being comprised in an edns-client-subnet (ECS) option merely links the judicial exception to a specific technological environment; it is not characterized as an insignificant extra-solution activity directed to "data gathering" or "receiving or transmitting data". Applicant argues that it is not conventional to pick a server by looking at a combination of its physical distance, routing distance, network delay, and current workload all at once, but the claims do not recite that combination.

Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-20 have been fully considered but they are not persuasive, or are moot in light of the references being applied in the current rejection. Regarding the 35 U.S.C. 103 rejection, Applicant argues the following in the remarks: Matthes and Moon fail to teach sending the server query request to the computing power server, the server query request being comprised in an edns-client-subnet (ECS) Option; and obtaining, from the computing power server, index information of a first server, …instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device.
Examiner has thoroughly considered Applicant's arguments, but respectfully finds them unpersuasive for at least the following reasons.

As to point (a), the examiner maintains that Matthes in view of Moon teaches sending the server query request to the computing power server; and obtaining, from the computing power server, index information of a first server, …instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device. The argument that Matthes in view of Moon does not teach the server query request being comprised in an edns-client-subnet (ECS) Option is moot in light of the references being applied.

Moon teaches sending the server query request to the computing power server because it recites in [0041] "a user requests a virtual desktop service from the management server 110 through a communication network (S201)", in [0042] "First, the management server 110 determines whether a usage pattern of a user is input to the database 114 (S202) to predict a server power usage amount in accordance with the input usage pattern (S203)", and in [0039] "In addition, the virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines. The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statues of the virtual machines 121 to the database 114 of the management server 110".

Moon teaches obtaining, from the computing power server, index information of a first server because it recites in [0057] "In this case, when a specific user requests a virtual desktop service for corresponding work, the management server 110 may select a virtualization server 120".
Moon teaches instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device because it recites in [0015] "The virtual desktop service providing method may further include, when a virtual desktop service request is received from the user, predicting a server power usage amount in accordance with a usage pattern of the user, selecting a virtualization server based on the server power usage amount, and transmitting driving commands on virtual machines to the selected virtualization server", in [0058] "Referring to FIG. 3, the scheduler 113 of the management server 110 then selects the virtualization server 120, and the agent 124 of the virtualization server 120 receives a request from the scheduler 113 to allocate virtual machines to the virtualization server 120 (S301)", in [0059] "Then, the service provider 115 of the management server 110 may provide a virtual desktop service to the user through the virtual machines performed by the selected virtualization server 120 (S302)", and in [0041] "a user requests a virtual desktop service from the management server 110 through a communication network". The user is a second communication device since it communicates through a communication network, and Figure 1 shows the user as a desktop computer that communicates through a network.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

As per claims 1, 8, and 15 (line numbers refer to claim 1): Lines 25-26 recite "a physical distance between the first server and a second communication device is the shortest" but it is unclear what this means (is the physical distance between the first server and the second communication device the shortest compared to the physical distances between other servers and the second communication device?). Lines 27-29 recite "a routing distance between the first server and the second communication device is the shortest or a delay between the first server and the second communication device is the smallest" but it is likewise unclear what this means (is the routing distance or delay between the first server and the second communication device the shortest or smallest compared to the routing distances or delays between other servers and the second communication device?).

As per claims 7 and 14 (line numbers refer to claim 7): Line 2 recites "the fifth request", which lacks antecedent basis. Additionally, no first, second, third, or fourth request is recited, so it is unclear how there can be a fifth request.

Claims 2-7, 9-14, and 16-20 depend from claims 1, 8, and 15, respectively, and fail to resolve the deficiencies of claims 1, 8, and 15, so they are rejected for the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.

As per claim 1, in Step 1 of the 101 analysis, the examiner has determined that the claim is directed to a method. Therefore, the claim is directed to one of the four statutory categories of invention.

In Step 2A, Prong 1 of the 101 analysis, the examiner has determined that the claim recites a judicial exception. Specifically, the limitation "performing a first operation according to the description information of the computing power task or the description information of the service, wherein the first operation comprises: determining a server query request according to the computing power requirement information of the computing power task or the computing power requirement information of the service" is a mental process. Performing the first operation can involve determining a server query request, and that can be considered a mental process since humans can mentally come up with a server query request.
In Step 2A, Prong 2 of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not integrate the judicial exceptions into a practical application, for the following rationale.

The limitations "obtaining description information of a computing power task or description information of a service", "performing a first operation according to the description information of the computing power task or the description information of the service, wherein the first operation comprises: obtaining computing power requirement information of the computing power task, or computing power requirement information of the service… the server query request being configured to request to query a computing power server; sending the server query request to the computing power server…obtaining index information of a first server", and "instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device" represent insignificant, extra-solution activities. The term "extra-solution activity" can be understood as "activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim" (MPEP 2106.05(g)).
The examiner has determined that the limitations "obtaining description information of a computing power task or description information of a service", "performing a first operation according to the description information of the computing power task or the description information of the service, wherein the first operation comprises: obtaining computing power requirement information of the computing power task, or computing power requirement information of the service… the server query request being configured to request to query a computing power server; sending the server query request to the computing power server…obtaining index information of a first server", and "instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device" are directed to mere data gathering, which is a category of insignificant extra-solution activities (MPEP 2106.05(g)).

The limitations "the server query request being comprised in an edns-client-subnet (ECS) Option" and "wherein the first server satisfies at least one of the followings: the first server satisfies a requirement of query auxiliary information included in the server query request; a physical distance between the first server and a second communication device is the shortest; a routing distance between the first server and the second communication device is the shortest or a delay between the first server and the second communication device is the smallest; or an available computing power status of the first server satisfies the computing power requirement information of the computing power task or the computing power requirement information of the service" merely describe attributes of the technological environment in which the abstract idea is operating.
The courts have identified that generally linking the use of a judicial exception to a technological environment does not integrate the judicial exception into a practical application (MPEP 2106.04(d)(1)). The limitations "performed by a first communication device" and "from the computing power server" apply a judicial exception on a generic computer. "Alappat's rationale that an otherwise ineligible algorithm or software could be made patent-eligible by merely adding a generic computer to the claim was superseded by the Supreme Court's Bilski and Alice Corp. decisions"; therefore, applying the judicial exceptions on a communication device or a computing power server, which are generic computers, does not integrate the judicial exceptions into a practical application (MPEP 2106.05(b)).

In Step 2B of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not recite significantly more than the abstract ideas identified above, for the following rationale. The limitations "obtaining description information of a computing power task or description information of a service", "performing a first operation according to the description information of the computing power task or the description information of the service, wherein the first operation comprises: obtaining computing power requirement information of the computing power task, or computing power requirement information of the service… the server query request being configured to request to query a computing power server; sending the server query request to the computing power server…obtaining index information of a first server", and "instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device" represent insignificant, extra-solution activities.
These limitations are well-understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). These are additional elements that the courts have recognized as well-understood, routine, or conventional (MPEP 2106.05(d)). The citation of court cases in the MPEP meets the Berkheimer evidentiary burden, since citation of a court case in the MPEP is one of the four types of evidentiary support that can be used to show that additional elements are well-understood, routine, or conventional (see Berkheimer v. HP, Inc., 125 USPQ2d 1649). Thus, the limitations do not amount to significantly more than the abstract idea.
The limitations "the server query request being comprised in an edns-client-subnet (ECS) Option" and "wherein the first server satisfies at least one of the followings: the first server satisfies a requirement of query auxiliary information included in the server query request; a physical distance between the first server and a second communication device is the shortest; a routing distance between the first server and the second communication device is the shortest or a delay between the first server and the second communication device is the smallest; or an available computing power status of the first server satisfies the computing power requirement information of the computing power task or the computing power requirement information of the service" merely describe attributes of the technological environment and therefore do not amount to significantly more than the exception itself (MPEP 2106.05(h)). The limitations "performed by a first communication device" and "from the computing power server" apply judicial exceptions on a generic computer and therefore do not provide significantly more.

As per claim 8, it is a first-communication-device claim corresponding to claim 1, so it is rejected for similar reasons. Additionally, claim 8 recites "a first communication device, comprising: a memory storing a computer program; and a processor coupled to the memory and configured to execute the computer program to perform operations", which recites generic computing components that neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 15, it is a non-transitory computer-readable storage medium claim corresponding to claim 1, so it is rejected for similar reasons.
Additionally, claim 15 recites "non-transitory computer-readable storage medium, storing a computer program, when the computer program is executed by a processor of a first communication device, causes the processor to perform operations", which recites generic computing components that neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 2 (and similarly for claims 9 and 16), it recites attributes of the technological environment and insignificant extra-solution activities that are well understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 3 (and similarly for claims 10 and 17), it recites attributes of the technological environment that neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 4 (and similarly for claims 11 and 18), it recites attributes of the technological environment and insignificant extra-solution activities that are well understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 5 (and similarly for claims 12 and 19), it recites insignificant extra-solution activities that are well understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.
As per claim 6 (and similarly for claims 13 and 20), it recites a mental process, attributes of the technological environment, and insignificant extra-solution activities that are well understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). Therefore, the additional elements neither integrate the judicial exceptions into a practical application nor recite significantly more.

As per claim 7 (and similarly for claim 14), it recites attributes of the technological environment that neither integrate the judicial exceptions into a practical application nor recite significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Matthes et al. (US 20150040136 A1, hereinafter Matthes), in view of Moon et al. (US 20150026306 A1, hereinafter Moon), and further in view of Zhu et al. (US 20230006965 A1, hereinafter Zhu). Matthes and Moon were cited in a prior office action.

As per claim 1, Matthes teaches an information processing method, performed by a first communication device ([0033] FIG. 3 is a block diagram illustrating a system constraints-aware HMP scheduling system 300 in accordance with embodiments of the present disclosure.
HMP scheduling system 300 includes an HMP scheduler 302 that is communicatively coupled between task queue 304 and MCP (multiple core processor) 320.), comprising:

obtaining description information of a computing power task or description information of a service (Fig. 3; [0038] In various embodiments, the system constraints-aware bias generator 316 is arranged to receive the task identifiers from the HMP scheduler 302; [0003] A heterogeneous computing architecture is designed to address asymmetric workloads by scheduling processing intensive tasks on bigger cores that have more complex features and higher speeds (while typically consuming more power) and by scheduling tasks with lighter workloads on simpler, smaller and more power efficient cores);

performing a first operation according to the description information of the computing power task or the description information of the service, wherein the first operation comprises: obtaining computing power requirement information of the computing power task, or computing power requirement information of the service ([0038] The system constraints-aware bias generator 316 uses the received task identifiers to make record requests of the heuristics library to determine the parameter constraints and core ID (and other such indexed information) associated with each received task identifiers; [0027] constraints (such as thermal or power related constraints); [0037] The performance constraints 350 can include one or more threshold values X1, X2, X3, and so on for user mode values, one or more threshold values L1, L2, L3, L4, L5, and so on for battery level ranges; [0070] The hint generator may include a heuristics library receiving processing tasks from a task queue and receiving data of performance constraint parameters measured at any given point in time.
The heuristics library performs a system constraints-aware function to arrange the processing tasks as a function of predetermined system constraints); determining a request according to the computing power requirement information of the computing power task or the computing power requirement information of the service ([0038] The system constraints-aware bias generator 316 uses the received task identifiers to make record requests of the heuristics library to determine the parameter constraints and core ID (and other such indexed information) associated with each received task identifiers; [0027] constraints (such as thermal or power related constraints); [0037] The performance constraints 350 can include one or more threshold values X1, X2, X3, and so on for user mode values, one or more threshold values L1, L2, L3, L4, L5, and so on for battery level ranges).

Matthes fails to teach determining a server query request according to the computing power requirement information of the computing power task or the computing power requirement information of the service, the server query request being configured to request to query a computing power server; sending the server query request to the computing power server, the server query request being comprised in an edns-client-subnet (ECS) Option; and obtaining, from the computing power server, index information of a first server, wherein the first server satisfies at least one of the followings: the first server satisfies a requirement of query auxiliary information included in the server query request; a physical distance between the first server and a second communication device is the shortest; a routing distance between the first server and the second communication device is the shortest or a delay between the first server and the second communication device is the smallest; or an available computing power status of the first server satisfies the computing power requirement information of the computing power task or the computing power requirement information of the service; and instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device.

However, Moon teaches determining a server query request according to the computing power requirement information of the computing power task or the computing power requirement information of the service, the server query request being configured to request to query a computing power server; sending the server query request to the computing power server ([0039] In addition, the virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines. The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statues of the virtual machines 121 to the database 114 of the management server 110; [0015] The virtual desktop service providing method may further include, when a virtual desktop service request is received from the user, predicting a server power usage amount in accordance with a usage pattern of the user, selecting a virtualization server based on the server power usage amount, and transmitting driving commands on virtual machines to the selected virtualization server; [0041] a user requests a virtual desktop service from the management server 110 through a communication network (S201); [0042] First, the management server 110 determines whether a usage pattern of a user is input to the database 114 (S202) to predict a server power usage amount in accordance with the input usage pattern (S203)); and obtaining, from the computing power server, index information of a first server, wherein the first server satisfies at least one of the followings: the first server satisfies a requirement of query auxiliary information included in the server query request; a physical distance between the first server and a second communication device is the shortest; a routing distance between the first server and the second communication device is the shortest or a delay between the first server and the second communication device is the smallest; or an available computing power status of the first server satisfies the computing power requirement information of the computing power task or the computing power requirement information of the service; and instructing, based on the index information, the first server to perform the computing power task for the second communication device or provide the service to the second communication device (Fig. 1; [0007] Therefore, according to an exemplary embodiment of the present invention, a method and an apparatus for allocating a virtual machine to provide a virtual desktop service in accordance with distances between virtualization servers and a user; [0009] Selecting the virtualization server may include selecting a virtualization server with the shortest network distance among the plurality of virtualization servers; [0057] In this case, when a specific user requests a virtual desktop service for corresponding work, the management server 110 may select a virtualization server 120 designated in accordance with a work type of a specific user. Therefore, users that perform similar work are grouped to use virtual machines allocated to the same virtualization server 120 and to share a CPU, a memory, and cache; [0052] a server with smallest power consumption may be determined as the virtualization server 120 with highest performance; [0039] In addition, the virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines.
The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statues of the virtual machines 121 to the database 114 of the management server 110; [0060] That is, in an environment where the virtualization servers that provide the virtual desktop service are locally dispersed, the virtual machines may be allocated to a virtualization server with a shortest network delay based on network distances from the virtualization servers to the user. In addition, after a server power usage amount is predicted in accordance with a virtual machine usage type of the user, a virtualization server may be selected in accordance with the prediction result; [0041] a user requests a virtual desktop service from the management server 110 through a communication network; [0015] The virtual desktop service providing method may further include, when a virtual desktop service request is received from the user, predicting a server power usage amount in accordance with a usage pattern of the user, selecting a virtualization server based on the server power usage amount, and transmitting driving commands on virtual machines to the selected virtualization server; [0058] Referring to FIG. 3, the scheduler 113 of the management server 110 then selects the virtualization server 120, and the agent 124 of the virtualization server 120 receives a request from the scheduler 113 to allocate virtual machines to the virtualization server 120 (S301). [0059] Then, the service provider 115 of the management server 110 may provide a virtual desktop service to the user through the virtual machines performed by the selected virtualization server 120 (S302).). 
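The claim's "at least one of the followings" criteria (shortest physical or routing distance, smallest delay, or sufficient available computing power) amount to filtering candidate servers and taking a minimum. A minimal illustration of that selection logic, with hypothetical field names not drawn from the claims or the cited references:

```python
from dataclasses import dataclass

@dataclass
class Server:
    # Hypothetical candidate-server attributes mirroring the claim's
    # alternative selection criteria; field names are illustrative only.
    name: str
    physical_km: float
    routing_hops: int
    delay_ms: float
    available_flops: float

def select_first_server(candidates, required_flops):
    # Keep only servers whose available computing power satisfies the
    # task's requirement, then pick the one with the smallest delay
    # (one of the claim's alternative "at least one of" criteria).
    eligible = [s for s in candidates if s.available_flops >= required_flops]
    return min(eligible, key=lambda s: s.delay_ms) if eligible else None

servers = [
    Server("edge-a", 12.0, 3, 4.2, 2e12),
    Server("edge-b", 5.0, 2, 1.8, 5e11),
    Server("core-c", 300.0, 9, 22.0, 8e12),
]
print(select_first_server(servers, required_flops=1e12).name)  # edge-a
```

Note this resolves the §112(b) question flagged above in the comparative sense: "shortest" is read relative to the other candidate servers.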
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Matthes with the teachings of Moon to reduce delay (see Moon [0046] Then, the management server 110 counts the number of virtualization servers 120 closest to the user on the network (with shortest network delay) (S207), and when one virtualization server 120 closest to the user exists, the virtualization server 120 is selected (S208).). Matthes and Moon fail to teach the server query request being comprised in an edns-client-subnet (ECS) Option. However, Zhu teaches the server query request being comprised in an edns-client-subnet (ECS) Option ([0114] According to existing extension mechanisms for DNS (Extension Mechanisms for DNS, EDNS), an EDNS client subnet option (ECS (EDNS Client Subnet) option) may be added to a DNS query request, and the ECS option includes an IP address of a client, so that a DNS server better determines, based on the IP address of the client, an IP address corresponding to a domain name that the client requests to query for.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Matthes and Moon with the teachings of Zhu to better match the client with an IP address (see Zhu [0114] According to existing extension mechanisms for DNS (Extension Mechanisms for DNS, EDNS), an EDNS client subnet option (ECS (EDNS Client Subnet) option) may be added to a DNS query request, and the ECS option includes an IP address of a client, so that a DNS server better determines, based on the IP address of the client, an IP address corresponding to a domain name that the client requests to query for.). As per claim 2, Matthes, Moon, and Zhu teach the information processing method according to claim 1. 
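As background on the ECS mechanism the rejection cites from Zhu [0114]: the EDNS Client Subnet option is standardized in RFC 7871 and carries a truncated client subnet inside the OPT pseudo-record of a DNS query. The sketch below builds the option's wire format. It is illustrative only, not the claimed method, and `build_ecs_option` is a hypothetical helper name.

```python
import ipaddress
import struct

# Sketch of the EDNS Client Subnet (ECS) option wire format per RFC 7871,
# the mechanism Zhu [0114] describes for carrying a client subnet in a
# DNS query. Illustrative only; not the claimed invention.

ECS_OPTION_CODE = 8  # IANA-assigned option code for edns-client-subnet

def build_ecs_option(client_ip: str, source_prefix_len: int) -> bytes:
    """Return OPTION-CODE, OPTION-LENGTH, and OPTION-DATA bytes for an
    ECS option; the scope prefix length is 0 in queries per RFC 7871."""
    addr = ipaddress.ip_address(client_ip)
    family = 1 if addr.version == 4 else 2  # IANA address family number
    # Only the significant octets of the source prefix are transmitted.
    addr_bytes = addr.packed[: (source_prefix_len + 7) // 8]
    option_data = struct.pack("!HBB", family, source_prefix_len, 0) + addr_bytes
    return struct.pack("!HH", ECS_OPTION_CODE, len(option_data)) + option_data
```

For a /24 IPv4 source, the option data is seven octets: two for the address family, one each for the source and scope prefix lengths, and three address octets.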
Matthes teaches wherein the first operation is performed when a first condition is satisfied, and the first condition comprises at least one of the following: an allocation request is obtained from the second communication device, wherein the allocation request is used to request to allocate computing power to the computing power task or the service ([0036] The supplied biased task information is used by the HMP scheduler 302 to allocate processing tasks so that the HMP scheduler assigns tasks in accordance with dynamic changes of the performance constraint parameters in view of (e.g., static or dynamic) system constraints; [0038] In various embodiments, the system constraints-aware bias generator 316 is arranged to receive the task identifiers from the HMP scheduler 302 and performance constraint parameters. The system constraints-aware bias generator 316 uses the received task identifiers to make record requests of the heuristics library to determine the parameter constraints and core ID (and other such indexed information) associated with each received task identifiers. The system constraints-aware bias generator 316 uses information from the received records to generate a hint (e.g., a BCBT) that, for example, suggests which core should be targeted for performing the associated task ID; [0037] The performance constraints 350 can include one or more threshold values X1, X2, X3, and so on for user mode values, one or more threshold values L1, L2, L3, L4, L5, and so on for battery level ranges; [0070] The disclosed subject matter further contemplates a method of arranging a scheduler to allocate one or more processing tasks to a plurality of heterogeneous processor cores; [0022] The heterogeneous cores may include a combination of processors of varying processing power; [0004] The scheduler may be communicatively coupled with each of the plurality of processor cores). 
Additionally, Moon teaches a query request is obtained from the second communication device, wherein the query request is used to request to query the computing power server ([0039] In addition, the virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines. The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statuses of the virtual machines 121 to the database 114 of the management server 110; [0041] a user requests a virtual desktop service from the management server 110 through a communication network; [0057] In this case, when a specific user requests a virtual desktop service for corresponding work, the management server 110 may select a virtualization server 120 designated in accordance with a work type of a specific user. Therefore, users that perform similar work are grouped to use virtual machines allocated to the same virtualization server 120 and to share a CPU, a memory, and cache. [0058] Referring to FIG. 3, the scheduler 113 of the management server 110 then selects the virtualization server 120, and the agent 124 of the virtualization server 120 receives a request from the scheduler 113 to allocate virtual machines to the virtualization server 120 (S301).). As per claim 3, Matthes, Moon, and Zhu teach the information processing method according to claim 1. Moon teaches wherein the server query request comprises at least one of the following: the computing power requirement information of the computing power task or the computing power requirement information of the service ([0039] In addition, the virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines. 
The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statuses of the virtual machines 121 to the database 114 of the management server 110; [0015] The virtual desktop service providing method may further include, when a virtual desktop service request is received from the user, predicting a server power usage amount in accordance with a usage pattern of the user, selecting a virtualization server based on the server power usage amount). Additionally, Zhu teaches terminal location information; network-selected user plane information; or data network access identifier (DNAI) information ([0126] the location information of the terminal device; [0075] The UPF (User Plane Function) network element; [0106] For example, the location information of the at least one application platform on which the first application is deployed may include at least one data network access identifier (DN Access Identifier, DNAI)). As per claim 4, Matthes, Moon, and Zhu teach the information processing method according to claim 1. 
Matthes teaches further comprising: obtaining computing power status information, wherein the computing power status information comprises at least one of the following: a computing power remaining status or a computing power available status; total computing power; a computing power use status; a predicted future computing power use status; or a computing power use status in a predetermined period of time ([0041] For example, as a performance constraint parameter of any type (such as battery level) approaches a threshold (such as 10 percent charge remaining; [0029] The ranges of the performance constraint parameters can be used to describe a multidimensional operating envelope that encompasses changes in thermal constraints (such as changes in case and/or junction temperatures), changes in the battery level of the device, a change in a selected user mode of the device, and the like. Various selected performance constraint parameters can be independently monitored by measuring (and/or calculating) the performance constraint parameters at various points in time during operation of the scheduler and/or in response to system, user-caused, and/or external events.). As per claim 5, Matthes, Moon, and Zhu teach the information processing method according to claim 1. 
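For context on the threshold mechanism the rejection cites from Matthes [0037] and [0041] (a scheduler biases task placement as performance constraint parameters such as battery level or temperature approach thresholds), here is a minimal sketch. Field names and threshold values are hypothetical, not taken from the reference or the claims.

```python
from dataclasses import dataclass

# Sketch of constraint-aware placement bias in the style of Matthes
# [0037]/[0041]: when a constraint parameter crosses a threshold, the
# scheduler hints toward efficiency cores. Hypothetical names/values.

@dataclass
class ConstraintState:
    battery_pct: float   # remaining battery charge, percent
    case_temp_c: float   # device case temperature, Celsius

def core_bias(state: ConstraintState) -> str:
    """Return a placement hint for an HMP-style scheduler."""
    if state.battery_pct <= 10 or state.case_temp_c >= 45:
        return "little"  # tight constraints: favor efficiency cores
    return "big"         # otherwise favor performance cores
```

A scheduler would re-evaluate this hint as the monitored parameters change, which matches the dynamic re-biasing Matthes [0036] describes.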
Moon teaches wherein the first operation further comprises: sending the index information of the first server ([0010] In the virtual desktop service providing method, selecting the virtualization server may include, when m virtualization servers with the shortest network distance exist among the plurality of virtualization servers, comparing resource usage rates of the m virtualization servers with each other and selecting a virtualization server with the smallest resource usage rate; [0039] The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statuses of the virtual machines 121 to the database 114 of the management server 110.). As per claim 6, Matthes, Moon, and Zhu teach the information processing method according to claim 5. Moon teaches wherein the first server further satisfies at least one of the following: the first server satisfies one or more conditions indicated by a query request obtained from the second communication device, wherein the query request is used to request to query the computing power server; or the first server successfully allocates a computing power resource in response to a resource allocation request, wherein the resource allocation request is used to request to allocate or reserve the computing power resource for the computing power task or the service ([0010] In the virtual desktop service providing method, selecting the virtualization server may include, when m virtualization servers with the shortest network distance exist among the plurality of virtualization servers, comparing resource usage rates of the m virtualization servers with each other and selecting a virtualization server with the smallest resource usage rate; [0052] a server with smallest power consumption may be determined as the virtualization server 120 with highest performance; [0039] In addition, the 
virtualization server 120 includes an agent 124 for receiving a request from the scheduler 113 of the management server 110 to allocate the virtual machines. The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statuses of the virtual machines 121 to the database 114 of the management server 110; [0060] That is, in an environment where the virtualization servers that provide the virtual desktop service are locally dispersed, the virtual machines may be allocated to a virtualization server with a shortest network delay based on network distances from the virtualization servers to the user. In addition, after a server power usage amount is predicted in accordance with a virtual machine usage type of the user, a virtualization server may be selected in accordance with the prediction result; [0015] The virtual desktop service providing method may further include, when a virtual desktop service request is received from the user, predicting a server power usage amount in accordance with a usage pattern of the user, selecting a virtualization server based on the server power usage amount, and transmitting driving commands on virtual machines to the selected virtualization server; [0058] Referring to FIG. 3, the scheduler 113 of the management server 110 then selects the virtualization server 120, and the agent 124 of the virtualization server 120 receives a request from the scheduler 113 to allocate virtual machines to the virtualization server 120 (S301). 
[0059] Then, the service provider 115 of the management server 110 may provide a virtual desktop service to the user through the virtual machines performed by the selected virtualization server 120 (S302); [0057] In this case, when a specific user requests a virtual desktop service for corresponding work, the management server 110 may select a virtualization server 120 designated in accordance with a work type of a specific user; [0041] a user requests a virtual desktop service from the management server 110 through a communication network;). As per claim 7, Matthes, Moon, and Zhu teach the information processing method according to claim 6. Moon teaches wherein the fifth request comprises at least one of the following: index information of a candidate server; a resource allocation request identifier; a computing power resource status occupied in the request; computing power task completion time or service completion time; computing power task start time or service start time; or the description information of the computing power task or the description information of the service ([0010] In the virtual desktop service providing method, selecting the virtualization server may include, when m virtualization servers with the shortest network distance exist among the plurality of virtualization servers, comparing resource usage rates of the m virtualization servers with each other and selecting a virtualization server with the smallest resource usage rate; [0039] The agent 124 may deliver usage amount information of various resources (a central processor unit (CPU), a memory, a network, and a disk) of the virtualization server 120, and allocation states and power usage statues of the virtual machines 121 to the database 114 of the management server 110.). As per claim 8, it is a first communication device claim of claim 1, so it is rejected for similar reasons. 
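The selection rule the rejection repeatedly maps from Moon [0010] and [0046] is: pick the server with the shortest network delay, and when several servers are equally close, break the tie by the smallest resource usage rate. A minimal sketch of that rule, with hypothetical field names not drawn from the claims:

```python
from dataclasses import dataclass

# Sketch of the Moon [0010]/[0046] selection rule: shortest network
# delay first, smallest resource usage rate as the tie-breaker.
# CandidateServer and its fields are hypothetical illustration names.

@dataclass
class CandidateServer:
    server_id: str
    network_delay_ms: float
    resource_usage_rate: float  # fraction of CPU/memory in use, 0.0-1.0

def select_server(candidates: list[CandidateServer]) -> CandidateServer:
    # A tuple key makes min() compare delay first and usage rate only
    # when delays are equal, mirroring Moon's two-stage comparison.
    return min(candidates,
               key=lambda s: (s.network_delay_ms, s.resource_usage_rate))
```

The tuple-key comparison is a compact equivalent of Moon's two steps (count the closest servers, then compare resource usage among them).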
Additionally, Matthes teaches a first communication device, comprising: a memory storing a computer program; and a processor coupled to the memory and configured to execute the computer program to perform operations ([0033] FIG. 3 is a block diagram illustrating a system constraints-aware HMP scheduling system 300 in accordance with embodiments of the present disclosure. HMP scheduling system 300 includes an HMP scheduler 302 that is communicatively coupled between task queue 304 and MCP (multiple core processor) 320; [0015] In some example embodiments, the computing device 100 comprises a megacell or a system-on-chip (SOC) which includes control logic such as a CPU 112 (Central Processing Unit), a storage 114 (e.g., random access memory (RAM)); [0016] The storage 114 (which can be memory such as RAM, flash memory, or disk storage) stores one or more software applications 130 (e.g., embedded applications) that, when executed by the CPU 112, perform any suitable function associated with the computing device 100.). As per claim 9, it is a first communication device claim of claim 2, so it is rejected for similar reasons. As per claim 10, it is a first communication device claim of claim 3, so it is rejected for similar reasons. As per claim 11, it is a first communication device claim of claim 4, so it is rejected for similar reasons. As per claim 12, it is a first communication device claim of claim 5, so it is rejected for similar reasons. As per claim 13, it is a first communication device claim of claim 6, so it is rejected for similar reasons. As per claim 14, it is a first communication device claim of claim 7, so it is rejected for similar reasons. As per claim 15, it is a non-transitory computer-readable storage medium claim of claim 1, so it is rejected for similar reasons. 
Additionally, Matthes teaches a non-transitory computer-readable storage medium, storing a computer program, when the computer program is executed by a processor of a first communication device, causes the processor to perform operations ([0019] tangible (e.g., "non-transitory") media (such as flash memory); [0033] FIG. 3 is a block diagram illustrating a system constraints-aware HMP scheduling system 300 in accordance with embodiments of the present disclosure. HMP scheduling system 300 includes an HMP scheduler 302 that is communicatively coupled between task queue 304 and MCP (multiple core processor) 320; [0015] In some example embodiments, the computing device 100 comprises a megacell or a system-on-chip (SOC) which includes control logic such as a CPU 112 (Central Processing Unit), a storage 114 (e.g., random access memory (RAM)); [0016] The storage 114 (which can be memory such as RAM, flash memory, or disk storage) stores one or more software applications 130 (e.g., embedded applications) that, when executed by the CPU 112, perform any suitable function associated with the computing device 100.). As per claim 16, it is a non-transitory computer-readable storage medium claim of claim 2, so it is rejected for similar reasons. As per claim 17, it is a non-transitory computer-readable storage medium claim of claim 3, so it is rejected for similar reasons. As per claim 18, it is a non-transitory computer-readable storage medium claim of claim 4, so it is rejected for similar reasons. As per claim 19, it is a non-transitory computer-readable storage medium claim of claim 5, so it is rejected for similar reasons. As per claim 20, it is a non-transitory computer-readable storage medium claim of claim 6, so it is rejected for similar reasons. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571)272-8522. 
The examiner can normally be reached Mon - Fri 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.L./Examiner, Art Unit 2195 /Aimee Li/Supervisory Patent Examiner, Art Unit 2195

Prosecution Timeline

Jun 27, 2023
Application Filed
Sep 25, 2025
Non-Final Rejection — §101, §103, §112
Dec 22, 2025
Response Filed
Jan 10, 2026
Final Rejection — §101, §103, §112
Mar 16, 2026
Response after Non-Final Action
Mar 31, 2026
Request for Continued Examination
Apr 01, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554523
REDUCING DEPLOYMENT TIME FOR CONTAINER CLONES IN COMPUTING ENVIRONMENTS
2y 5m to grant · Granted Feb 17, 2026
Patent 12547458
PLATFORM FRAMEWORK ORCHESTRATION AND DISCOVERY
2y 5m to grant · Granted Feb 10, 2026
Patent 12468573
ADAPTIVE RESOURCE PROVISIONING FOR A MULTI-TENANT DISTRIBUTED EVENT DATA STORE
2y 5m to grant · Granted Nov 11, 2025
Patent 12461785
GRAPHIC-BLOCKCHAIN-ORIENTATED SHARDING STORAGE APPARATUS AND METHOD THEREOF
2y 5m to grant · Granted Nov 04, 2025
Patent 12443425
ISOLATED ACCELERATOR MANAGEMENT INTERMEDIARIES FOR VIRTUALIZATION HOSTS
2y 5m to grant · Granted Oct 14, 2025
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
99%
With Interview (+79.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
