DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
1. Claims 1-5, 7, and 16 are currently amended.
2. Claims 1-19 are pending.
3. Claims 1-19 are rejected.
Information Disclosure Statement
4. The information disclosure statement (IDS) submitted on February 5, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
5. Regarding 35 U.S.C. 112(b) Rejections:
Applicant’s amendments and arguments with respect to the rejections under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections under 35 U.S.C. 112(b) have been withdrawn.
6. Regarding Prior Art Rejections:
Applicant’s amendments and arguments with respect to claims 1, 9, and 17 have been fully considered but are not persuasive. The rejections under 35 U.S.C. 103 are maintained. Additionally, applicant’s arguments are moot in view of a new ground of rejection necessitated by the amendment.
7. Applicant argues in remarks:
Applicant submits that neither Yalamanchili nor Barsness, alone or in any proper combination, teaches or suggests the amended features recited in independent claim 1.
Applicant has amended the independent claims to recite the feature of "modifying the system configuration by using a neural network model trained to obtain information regarding the system configuration optimized for acceleration for each of the plurality of events generated by the plurality of applications." The present invention is distinguished from the cited art in that it goes beyond merely changing the system configuration based on an event for accelerating execution of an application: the changes are supplied to an AI/data-based model to enable self-learning, and, based on the result of this learning, an optimal performance configuration is dynamically allocated by adapting to real-time system states (such as load and temperature).
Regarding the feature of feeding, by the electronic device, the modified system configuration for each event to a data driven model for self-learning over a period of time in claim 2, the Office cited the following paragraph from Yalamanchili [0054]: In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store. Once the build file is downloaded, the hardware controller 110 may automatically reconfigure the configurable co-processor core(s) 150 for operation based on the use-case(s)/usage-pattern. This auto-reconfiguration can take place in the background to reduce/hide lag/delays associated with the configuration changes.
However, Applicant submits that claim 2 relates to modifying the system configuration and then training a neural network model by using the modified system configuration, whereas the above amendment is to explicitly specify that the system configuration can be modified by using a trained neural network model.
Furthermore, the paragraph from Yalamanchili merely relates to the feature that the user is not involved in the build selection process, and instead, the processor learns the usage pattern of the user and, based thereon, selects a build file from a store, and downloads the selected build file to automatically reconfigure the configurable co-processor core(s).
In other words, the learning in Yalamanchili is only related to the usage pattern of the user and is distinguishable from the feature of updating/modifying the system configuration itself based on the learning.
Moreover, Applicant submits that Yalamanchili discloses that the CPU learns the usage pattern of the user, which is merely an operation based on information regarding the usage pattern of the user, and is deemed to be distinguishable from the learning of a neural network model or an AI model.
Neither Yalamanchili nor Barsness discloses the feature of modifying the system configuration by obtaining information regarding the system configuration optimized for acceleration for each of the plurality of events by using a neural network model.
Therefore, Applicant submits that Barsness fails to cure the above noted deficiencies of Yalamanchili.
Thus, Applicant submits that neither Yalamanchili nor Barsness, alone or in any proper combination, teaches or suggests the amended features recited in independent claim 1.
8. With the newly amended claims, the overall scope of the claims no longer reads the same way it did before. Therefore, new art and a new combination thereof were introduced to better suit the new scope of the claims.
9. Although Yalamanchili and Barsness do not explicitly teach using a neural network model, Yalamanchili teaches using a user’s usage patterns in the auto-reconfiguration mode. Additionally, the user is not involved in the selection process; the CPU learns the usage pattern of the user and selects a corresponding hardware accelerator. The use of user usage patterns is often seen in machine learning methods and processes, which would be obvious to one of ordinary skill in the art. In prior art, Aseev et al. US 20180232245 A1 teaches:
[0045] The device and user information received by data collection module 122 is then stored as files 116 in device information database 114 where it can be classified based on different criteria at step 310. Next, the collected device and user data can be accessed by the analysis module 124, in which the further data processing can be performed at step 315. Based on the analysis of the collected device and user data, the analysis module 124 can automatically build one or more usage patterns 128 or a set of different patterns. For example, in an exemplary aspect, the analysis module 124 can develop a prediction model using the information collected by the data collection module 122 discussed above. In this aspect, the prediction model is trained by a specific of machine learning algorithm (e.g., random forest, SVM, neural network), where the training set is based on the collected information. In one aspect, if a new user is identified, the device information and user information of the new user will be provided as an input to the machine learning algorithm, which will predict the suitable software or system configurations for the user based on the prediction model. In one aspect, there are two stages to predict the configuration. In the first stage, basic configuration is created during the installation on the new device based on the usage patterns and characteristics that are immediately available such as CPU/RAM usage and amount, network throughput and the like. In the second stage, the configuration information is updated (i.e., tuned) according to a predetermined frequency based on new usage data collected and then the machine learning prediction model is retrained based on the new information to predict suitable configurations.
10. Aseev teaches automatically configuring and adjusting computer systems based on data collection and analysis to create new usage patterns. The method then compares device and user information of the computing device with the usage patterns associated with the existing devices to identify an optimal configuration for the computing device. The optimal configuration is used to generate installation instructions and configurations. Finally, the system settings of the computing device are automatically configured based on the instructions and configurations (Abstract). The prediction model is trained by a machine learning algorithm, which can be a neural network model. By automatically applying configurations based on training a neural network model using user usage patterns, this process helps optimize software installation and configuration settings that meet a device user's specific needs and desired settings and functionality, as discussed in Aseev ([0005]). Therefore, it would be obvious to one of ordinary skill in the art that inputting user usage patterns into machine learning algorithms, such as neural network models, enables an optimal configuration to be generated.
Additionally, claims 2-6, 8-15, and 17-19 depend from and further limit amended claims 1, 7, and 16 and are therefore also rejected under 35 U.S.C. 103. The full rejection can be found in the 35 U.S.C. 103 rejection section below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. Claims 1-10 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yalamanchili US 20140281472 A1, in view of Barsness et al. US 20130159745 A1, and in further view of Aseev et al. US 20180232245 A1.
12. With regard to claim 1, Yalamanchili teaches:
A method for handling at least one resource operating performance configuration in an electronic device ([0012] According to another aspect of the present disclosure, a method for reconfiguration of co-processor cores for general purpose processors is described. The method includes selecting from a set of hardware accelerators according to a user's use pattern. The method also includes reconfiguring the configurable co-processor core(s) of a general purpose processor according to a selected hardware accelerator.), the method comprising:
determining, by the electronic device, at least one event that is generated by at least one application among a plurality of applications running in the electronic device that requires acceleration of the at least one application from the plurality of applications running in the electronic device ([0008] In some architectures, special-purpose processors that are often referred to as "accelerators" are implemented to perform certain types of operations. For example, a processor executing a program may offload certain types of operations to an accelerator that is configured to perform those types of operations efficiently. Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU. Hardware accelerators may be designed for computationally intensive software code. Depending upon granularity, hardware acceleration can vary from small functional units to large functional blocks. Examples of such hardware acceleration include blitting acceleration functionality in graphics processing units (GPUs) and instructions for complex operations in CPUs; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another. To prevent large amounts of any associated reconfiguration overhead, the algorithm can adjust itself to reconfigure after a duration of time or usage, which may be predetermined. Reconfiguration decisions can be dependent on the type of configurable blocks and recommendations on reconfiguration; Examiner’s Note: Although “events” are not specifically mentioned, there are instances of intensive computations which require acceleration. These instances are similar to the events. The algorithm is able to adjust itself to reconfigure after a duration of time or usage.);
fetching, by the electronic device, a system configuration based on the determination ([0038] The configurable co-processor core(s) 150 can be reconfigured to a desired hardware structure by acquiring appropriate build files by for example, downloading from an on-line hardware configuration store as shown in FIG. 2; [0046] At block 712, a build file corresponding to the selected hardware accelerator is downloaded from the on-line hardware configuration store. At block 714, the build file corresponding to the selecting hardware accelerator is loaded into a configurable co-process core of a wireless mobile device to operate as the selected hardware engine; Fig. 7, 712 Download a build file corresponding to the selected hardware accelerator from the on-line hardware configuration store; Examiner’s Note: Downloading is analogous with fetching.);
modifying, by the electronic device, the system configuration for the at least one event to accelerate execution of the at least one event associated with the at least one application running in the electronic device by using a neural network model trained to obtain information regarding the system configuration optimized for acceleration for each of the plurality of events generated by the plurality of applications ([0008]; [0038] The configurable co-processor core(s) 150 can be reconfigured to a desired hardware structure by acquiring appropriate build files by for example, downloading from an on-line hardware configuration store as shown in FIG. 2. The hardware configuration store may store all the configuration builds currently available for the configurable co-processor core(s) 150. These builds can be changed (including bug fixes, adding new builds, etc.) similar to the software apps in an app store; [0048] In another aspect of the disclosure, an automated process for reconfiguration of a configurable co-processor core is described. In particular, some users may not want to manually switch between hardware engines of interest. These users do not want to manually reconfigure the configurable co-processor core(s) to hardware engines of interest. Other users may not even recognize the importance of this reconfiguration. That is, manual reconfiguration might make things more efficient, but automated reconfiguration may be more comfortable. In addition, although a user's use-cases of interest may change over time, most users have a pattern that can be learned over time; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user; [0054] In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store; Examiner’s Note: The builds (configuration) can be changed. The events are instances of intensive computations which require acceleration. The reconfiguration (modifying) can be automated, without need for user selection.); and
accelerating, by the electronic device, execution of the at least one event associated with the at least one application running in the electronic device based on the modified system configuration ([0008] Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU.).
Although Yalamanchili teaches operation-specific accelerators for executing specific operations/tasks ([0008] In some architectures, special-purpose processors that are often referred to as "accelerators" are implemented to perform certain types of operations. For example, a processor executing a program may offload certain types of operations to an accelerator that is configured to perform those types of operations efficiently. Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU. Hardware accelerators may be designed for computationally intensive software code. Depending upon granularity, hardware acceleration can vary from small functional units to large functional blocks. Examples of such hardware acceleration include blitting acceleration functionality in graphics processing units (GPUs) and instructions for complex operations in CPUs), Yalamanchili fails to explicitly teach the encountering of such events and a plurality of running applications.
However, in analogous art, Barsness teaches:
determining, by the electronic device, at least one event that is generated by at least one application among a plurality of applications running in the electronic device that requires acceleration of the at least one application from the plurality of applications running in the electronic device ([0026] ...application programs running on adjacent nodes; [0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered; Examiner’s Note: The application beginning execution and running until it encounters hooks for the accelerated portion is an event generated by application(s) running on the device that requires acceleration. There are multiple applications running on the system in parallel on adjacent nodes.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness such that at least one event, generated by at least one application among a plurality of applications running in the electronic device and requiring acceleration of the at least one application, is determined by the electronic device. Both Yalamanchili and Barsness teach modifying system configurations in order to accelerate sections/tasks of an application that is running on the system. Yalamanchili does not explicitly teach events, just instances of intensive computation for acceleration. Barsness, however, specifically mentions that the hook indicating the need for acceleration of an application causes the system to determine whether acceleration is needed, as discussed in Barsness ([0040]). This is an event. If needed, the application is moved to the multi-node computer system instead of remaining on the front-end computer system. This changes the system configuration. Therefore, the system is able to recognize when acceleration is necessary. Additionally, Barsness also teaches multiple applications running in parallel on adjacent nodes, which Yalamanchili had also failed to teach.
Although Yalamanchili teaches of automated reconfiguration of the system configuration and selecting a hardware accelerator according to user usage patterns without needing a user to select a configuration, both Yalamanchili and Barsness fail to explicitly teach that the modifying is done by using a neural network model trained to obtain information regarding the system configuration optimized for acceleration for each of the plurality of events generated by the plurality of applications.
However, in analogous art, Aseev teaches:
by using a neural network model trained to obtain information regarding the system configuration optimized for acceleration for each of the plurality of events generated by the plurality of applications ([0045] The device and user information received by data collection module 122 is then stored as files 116 in device information database 114 where it can be classified based on different criteria at step 310. Next, the collected device and user data can be accessed by the analysis module 124, in which the further data processing can be performed at step 315. Based on the analysis of the collected device and user data, the analysis module 124 can automatically build one or more usage patterns 128 or a set of different patterns. For example, in an exemplary aspect, the analysis module 124 can develop a prediction model using the information collected by the data collection module 122 discussed above. In this aspect, the prediction model is trained by a specific of machine learning algorithm (e.g., random forest, SVM, neural network), where the training set is based on the collected information. In one aspect, if a new user is identified, the device information and user information of the new user will be provided as an input to the machine learning algorithm, which will predict the suitable software or system configurations for the user based on the prediction model. In one aspect, there are two stages to predict the configuration. In the first stage, basic configuration is created during the installation on the new device based on the usage patterns and characteristics that are immediately available such as CPU/RAM usage and amount, network throughput and the like. In the second stage, the configuration information is updated (i.e., tuned) according to a predetermined frequency based on new usage data collected and then the machine learning prediction model is retrained based on the new information to predict suitable configurations.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili and Barsness with the teachings of Aseev by using a neural network model trained to obtain information regarding the system configuration optimized for acceleration for each of the plurality of events generated by the plurality of applications. Similarly to Yalamanchili and Barsness, Aseev teaches automatically configuring and adjusting computer systems based on data collection and analysis to create new usage patterns. The method then compares device and user information of the computing device with the usage patterns associated with the existing devices to identify an optimal configuration for the computing device. The optimal configuration is used to generate installation instructions and configurations. Finally, the system settings of the computing device are automatically configured based on the instructions and configurations (Abstract). The prediction model is trained by a machine learning algorithm, which can be a neural network model. By automatically applying configurations based on training a neural network model using user usage patterns, this process helps optimize software installation and configuration settings that meet a device user's specific needs and desired settings and functionality, as discussed in Aseev ([0005]).
13. With regard to claim 2, Yalamanchili further teaches:
further comprising:
feeding, by the electronic device, the modified system configuration for each event to the neural network model, wherein each event is accelerated by setting the modified system configuration, and wherein the modified system configuration is fed to the neural network model for self-learning over a period of time ([0054] In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store. Once the build file is downloaded, the hardware controller 110 may automatically reconfigure the configurable co-processor core(s) 150 for operation based on the use-case(s)/usage-pattern. This auto-reconfiguration can take place in the background to reduce/hide lag/delays associated with the configuration changes; Examiner’s Note: Auto-reconfiguration is self-learning, and reconfiguration is performed.).
Yalamanchili fails to explicitly teach that the model is a neural network model.
However, in analogous art, Aseev teaches:
the neural network model ([0045] The device and user information received by data collection module 122 is then stored as files 116 in device information database 114 where it can be classified based on different criteria at step 310. Next, the collected device and user data can be accessed by the analysis module 124, in which the further data processing can be performed at step 315. Based on the analysis of the collected device and user data, the analysis module 124 can automatically build one or more usage patterns 128 or a set of different patterns. For example, in an exemplary aspect, the analysis module 124 can develop a prediction model using the information collected by the data collection module 122 discussed above. In this aspect, the prediction model is trained by a specific of machine learning algorithm (e.g., random forest, SVM, neural network), where the training set is based on the collected information. In one aspect, if a new user is identified, the device information and user information of the new user will be provided as an input to the machine learning algorithm, which will predict the suitable software or system configurations for the user based on the prediction model. In one aspect, there are two stages to predict the configuration. In the first stage, basic configuration is created during the installation on the new device based on the usage patterns and characteristics that are immediately available such as CPU/RAM usage and amount, network throughput and the like. In the second stage, the configuration information is updated (i.e., tuned) according to a predetermined frequency based on new usage data collected and then the machine learning prediction model is retrained based on the new information to predict suitable configurations.),
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Aseev where the data driven model is a neural network model. Similarly to Yalamanchili, Aseev teaches automatically configuring and adjusting computer systems based on data collection and analysis to create new usage patterns. The method then compares device and user information of the computing device with the usage patterns associated with the existing devices to identify an optimal configuration for the computing device. The optimal configuration is used to generate installation instructions and configurations. Finally, the system settings of the computing device are automatically configured based on the instructions and configurations (Abstract). The prediction model is trained by a machine learning algorithm, which can be a neural network model. By automatically applying configurations based on training a neural network model using user usage patterns, this process helps optimize software installation and configuration settings that meet a device user's specific needs and desired settings and functionality, as discussed in Aseev ([0005]).
14. With regard to claim 3, Yalamanchili further teaches:
wherein the modifying, by the electronic device, of the system configuration for the at least one event to accelerate the execution of the at least one event associated with the at least one application running in the electronic device comprises:
determining, by the electronic device, a nature of a task associated with the at least one application running in the electronic device and at least one parameter associated with at least one application running in the electronic device, wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature (Fig. 2, 200 Hardware Configuration Store; [0008] Hardware accelerators may be designed for computationally intensive software code; [0034] The DSP cores 118A and 118B, and the processor cores 120A and 120B of the multi-core CPU 120 support various functions such as video, audio, graphics, gaming, and the like; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another; Examiner’s Note: The nature of the task can be graphics or gaming tasks. These can be seen in the hardware configuration store as different engines. The parameter can be whether or not the application is intense.);
learning, by the electronic device, at least one system configuration for the determined nature of the task associated with the at least one application running in the electronic device and the determined parameter associated with the at least one application running in the electronic device ([0054] In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store. Once the build file is downloaded, the hardware controller 110 may automatically reconfigure the configurable co-processor core(s) 150 for operation based on the use-case(s)/usage-pattern. This auto-reconfiguration can take place in the background to reduce/hide lag/delays associated with the configuration changes; Examiner’s Note: The parameter(s) is the user’s usage pattern that helps the system select a build file of a corresponding hardware accelerator.); and
modifying, by the electronic device, the system configuration for the at least one event to accelerate the execution of the at least one event associated with the at least one application running in the electronic device based on the learning ([0038] The configurable co-processor core(s) 150 can be reconfigured to a desired hardware structure by acquiring appropriate build files by for example, downloading from an on-line hardware configuration store as shown in FIG. 2. The hardware configuration store may store all the configuration builds currently available for the configurable co-processor core(s) 150. These builds can be changed (including bug fixes, adding new builds, etc.) similar to the software apps in an app store; Examiner’s Note: The builds (configuration) can be changed.).
Yalamanchili fails to explicitly teach wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature.
However, in analogous art, Barsness teaches:
wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature ([0036] The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418; [0040] In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; [0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application. For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM; Examiner’s Note: The parameters are the user specific priorities set for factors such as power consumption, runtime, or time of day. The applications are already running until the hook, where acceleration is decided.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature. Yalamanchili discusses choosing a hardware accelerator from a hardware configuration store to act as a hardware engine in order to reconfigure and optimize cores. The parameter for determining acceleration of applications in Yalamanchili refers to intensity, which can be understood as system load. Barsness discusses when to execute the acceleration of an application based on user priorities, such as time, temperature, and power consumption. It would be beneficial to take into consideration parameters such as a system load, temperature of the electronic device, power consumption, or an internal component temperature. These thresholds or limits are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system, as discussed in Barsness ([0036]). In example embodiments, the determination of whether acceleration is needed is made while the application is running, based on the user's priorities regarding power consumption, runtime, or time of day, as discussed in Barsness ([0040]; [0041]).
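For illustration only, the threshold-based decision described in Barsness ([0036]) could be sketched as follows. All names, parameters, and limit values here are hypothetical and are not drawn from either reference; this is a minimal sketch of comparing monitored parameters against user-set limits, not the references' actual implementation.

```python
# Hypothetical sketch: a scheduler decides whether to accelerate by
# comparing runtime parameters against user-set thresholds, in the
# spirit of the user priorities of Barsness [0036]. Names are invented.

def should_accelerate(params, limits):
    """Return True only if every monitored parameter is within its limit."""
    checks = [
        params["system_load"] <= limits["max_system_load"],
        params["device_temp"] <= limits["max_device_temp"],
        params["power_draw"] <= limits["max_power_draw"],
    ]
    return all(checks)

params = {"system_load": 0.45, "device_temp": 38.0, "power_draw": 2.1}
limits = {"max_system_load": 0.8, "max_device_temp": 70.0, "max_power_draw": 5.0}
print(should_accelerate(params, limits))  # True: all parameters within limits
```

A real resource scheduler would of course weigh priorities and strictness levels rather than apply hard cutoffs; the sketch only shows parameters acting as inputs to an acceleration decision.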
15. With regard to claim 4, Barsness further teaches:
further comprising:
detecting, by the electronic device, a trigger for the at least one event to accelerate execution of the at least one event, wherein the trigger is a start of an event from the at least one event ([0040] ...the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered; Examiner’s Note: The application begins execution until it runs into hooks for the accelerated portion, is an event that is generated by application(s) running on the device that require acceleration. There are multiple applications running on the system.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness where detecting, by the electronic device, a trigger for each event to accelerate execution of the at least one event, wherein the trigger is a start of an event from the at least one event. Both Yalamanchili and Barsness teach modifying system configurations in order to accelerate sections or tasks of an application running on the system. The hook that indicates the need for acceleration of an application causes the system to determine whether acceleration is needed, as discussed in Barsness ([0040]). If needed, the application is moved to the multi-node computer system instead of remaining on the front-end computer system, which changes the system configuration. Therefore, the system is able to recognize when acceleration is necessary.
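The hook-driven trigger of Barsness ([0040]) can be illustrated with the following hypothetical sketch: an application executes section by section until a marker ("hook") for an accelerable portion is encountered, at which point a scheduler callback is consulted. The section names and data layout are invented for illustration and do not appear in the reference.

```python
# Hypothetical sketch of hook-based triggering: execution proceeds until
# a hook marks the start of an accelerable event, then the scheduler is
# consulted (Barsness [0040]). All names are illustrative.

def run_until_hook(sections, on_hook):
    """Execute sections in order; invoke the scheduler callback at each hook."""
    decisions = []
    for section in sections:
        if section.get("hook"):          # start of an accelerable event
            decisions.append(on_hook(section["name"]))
    return decisions

app = [
    {"name": "setup", "hook": False},
    {"name": "matrix_multiply", "hook": True},   # accelerable portion
    {"name": "teardown", "hook": False},
]
print(run_until_hook(app, lambda name: f"accelerate:{name}"))
# ['accelerate:matrix_multiply']
```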
16. With regard to claim 5, Barsness further teaches:
further comprising:
detecting, by the electronic device, at least one parameter as an input to set an optimal system configuration for accelerated execution of the at least one event in the electronic device, wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature ([0036] The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418. [0040] In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; [0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application.
For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM; Examiner’s Note: The parameters are the user specific priorities set for factors such as power consumption, runtime, or time of day. The applications are already running until the hook, where acceleration is decided.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness where detecting, by the electronic device, at least one parameter as an input to set an optimal system configuration for accelerated execution of each event in the electronic device, wherein the at least one parameter comprises at least one of a system load, temperature of the electronic device, power consumption, or an internal component temperature. Yalamanchili discusses choosing a hardware accelerator from a hardware configuration store to act as a hardware engine in order to reconfigure and optimize cores. The methods of optimization can be improved hardware, resource utilization, etc. Barsness discusses when to execute the acceleration of an application based on user priorities, such as time, temperature, and power consumption. It would be beneficial to take into consideration parameters such as a system load, temperature of the electronic device, power consumption, or an internal component temperature. These thresholds or limits are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system, as discussed in Barsness ([0036]). In example embodiments, the determination of whether acceleration is needed is made while the application is running, based on the user's priorities regarding power consumption, runtime, or time of day, as discussed in Barsness ([0040]; [0041]).
17. With regard to claim 6, Barsness teaches:
wherein the resource operating performance configuration comprises at least one of:
a central processing unit (CPU) Operating Performance Point (OPP) ([0020] The compute chip incorporates two processors or central processor units (CPUs) and is mounted on a node daughter card 114.);
a graphics processing unit (GPU) OPP;
a process aware scheduler configuration;
an energy aware scheduler configuration ([0040] For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness. This means that energy consumption is a higher priority but a moderate amount of additional energy consumption is tolerable. It is also assumed that the historical runtime (428 in FIG. 4) on the multi-node computer system is 50% faster for this application than the historical runtime on the front-end computer system, and the historical power consumption (426 in FIG. 4) for this application is 3% higher on the multi-node computer system than on the front-end computer system. The resource scheduler analyzes this information in the application profile and determines that the power consumption is low (3%) compared to the run time improvement (50%). Thus the resource scheduler would send the application to the multi-node computer system for accelerated execution. In contrast, if the power consumption was higher or the strictness was higher, then the additional power consumption would indicate not to accelerate the application to save power by running the application in a front-end computer system.);
a process thread scheduling configuration; or
a priority scheduling configuration ([0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness wherein the resource operating performance configuration comprises at least one of: a central processing unit (CPU) Operating Performance Point (OPP); a graphics processing unit (GPU) OPP; a process aware scheduler configuration; an energy aware scheduler configuration; a process thread scheduling configuration; or a priority scheduling configuration. By allowing the resource operating performance configuration to include multiple scheduling options, the system can determine acceleration based on a user's preferred priority ([0040]; [0041]; [0042]; [0043]). This gives the user customizable options for application execution, allowing an application to execute when the user needs or wants it to.
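The claimed alternatives can be pictured as one selectable configuration object. The following sketch is purely illustrative: the field names and values are hypothetical and are not taken from the claims or the cited references; it only shows how the recited options (CPU/GPU OPPs, scheduler settings, priorities) might be grouped so that any subset is set at a time.

```python
# Hypothetical grouping of the claim 6 alternatives into one
# "resource operating performance configuration". Names are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PerformanceConfig:
    cpu_opp_mhz: Optional[int] = None        # CPU Operating Performance Point
    gpu_opp_mhz: Optional[int] = None        # GPU Operating Performance Point
    energy_aware_scheduler: bool = False     # energy aware scheduler configuration
    thread_priority: Optional[int] = None    # priority scheduling configuration

    def active_options(self) -> List[str]:
        """List which of the alternative options this configuration sets."""
        return [k for k, v in self.__dict__.items() if v not in (None, False)]

cfg = PerformanceConfig(cpu_opp_mhz=2400, energy_aware_scheduler=True)
print(cfg.active_options())  # ['cpu_opp_mhz', 'energy_aware_scheduler']
```

The "at least one of" claim language maps to the fact that any non-empty subset of fields may be populated.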
18. With regard to claim 7, Yalamanchili teaches:
A method for handling at least one resource operating performance configuration in an electronic device, the method comprising:
determining, by the electronic device, a nature of a task associated with at least one application among a plurality of applications running in the electronic device and a parameter associated with the at least one application running in the electronic device (Fig. 2, 200 Hardware Configuration Store; [0008] Hardware accelerators may be designed for computationally intensive software code; [0034] The DSP cores 118A and 118B, and the processor cores 120A and 120B of the multi-core CPU 120 support various functions such as video, audio, graphics, gaming, and the like; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another; Examiner’s Note: The nature of the task can be graphics or gaming tasks. These can be seen in the hardware configuration store as different engines. The parameter can be whether or not the application is intense.);
learning, by the electronic device, at least one system configuration for the determined nature of the task associated with the at least one application running in the electronic device and the determined parameter associated with the at least one application running in the electronic device (Fig. 2, 200 Hardware Configuration Store; [0008] Hardware accelerators may be designed for computationally intensive software code; [0034] The DSP cores 118A and 118B, and the processor cores 120A and 120B of the multi-core CPU 120 support various functions such as video, audio, graphics, gaming, and the like; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another; [0054] In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store. Once the build file is downloaded, the hardware controller 110 may automatically reconfigure the configurable co-processor core(s) 150 for operation based on the use-case(s)/usage-pattern. This auto-reconfiguration can take place in the background to reduce/hide lag/delays associated with the configuration changes; Examiner’s Note: The nature of the task can be graphics or gaming tasks. These can be seen in the hardware configuration store as different engines. The parameter can be whether or not the application is intense. Auto-configuration is self-learning.);
allocating, by the electronic device, the at least one resource operating performance configuration for the at least one application running in the electronic device based on the learning ([0055] In particular, the auto-reconfiguration mode may adaptively adjust/change the hardware in the device to work better (or even optimally) for a user's changing usage pattern. Additionally, the auto-reconfiguration mode makes continuous/improved use of the reconfigurable fabric in the wireless mobile device 100. Because, the auto-reconfiguration mode detects the hardware accelerator that is specified for efficient operation of the wireless mobile device and configures the reconfigurable fabric in advance, configuration latency associated with reconfiguring medium or large cores is hidden from the user. The auto-reconfiguration mode may provide improved power, performance and resource utilization by learning a user's usage pattern. The auto-reconfiguration mode may dynamically improve (or even optimize) the device for each individual user and for changing usage patterns of each user; Examiner’s Note: The auto-reconfiguration mode can provide improved resource utilization.); and
accelerating, by the electronic device, execution of the at least one event associated with the at least one application running in the electronic device based on the at least one allocated resource operating performance configuration ([0008] Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU.),
Yalamanchili fails to explicitly teach wherein the at least one system configuration is obtained through a neural network model trained to obtain information regarding the at least one system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications.
However, in analogous art, Aseev teaches:
wherein the at least one system configuration is obtained through a neural network model trained to obtain information regarding the at least one system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications ([0045] The device and user information received by data collection module 122 is then stored as files 116 in device information database 114 where it can be classified based on different criteria at step 310. Next, the collected device and user data can be accessed by the analysis module 124, in which the further data processing can be performed at step 315. Based on the analysis of the collected device and user data, the analysis module 124 can automatically build one or more usage patterns 128 or a set of different patterns. For example, in an exemplary aspect, the analysis module 124 can develop a prediction model using the information collected by the data collection module 122 discussed above. In this aspect, the prediction model is trained by a specific of machine learning algorithm (e.g., random forest, SVM, neural network), where the training set is based on the collected information. In one aspect, if a new user is identified, the device information and user information of the new user will be provided as an input to the machine learning algorithm, which will predict the suitable software or system configurations for the user based on the prediction model. In one aspect, there are two stages to predict the configuration. In the first stage, basic configuration is created during the installation on the new device based on the usage patterns and characteristics that are immediately available such as CPU/RAM usage and amount, network throughput and the like. 
In the second stage, the configuration information is updated (i.e., tuned) according to a predetermined frequency based on new usage data collected and then the machine learning prediction model is retrained based on the new information to predict suitable configurations.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Aseev wherein the at least one system configuration is obtained through a neural network model trained to obtain information regarding the at least one system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications. Similarly to Yalamanchili, Aseev teaches automatically configuring and adjusting computer systems based on data collection and analysis to create new usage patterns. The method then compares device and user information of the computing device with the usage patterns associated with the existing devices to identify an optimal configuration for the computing device. The optimal configuration is used to generate installation instructions and configurations. Finally, the system settings of the computing device are automatically configured based on the instructions and configurations (Abstract). The prediction model is trained by a machine learning algorithm, which can be a neural network model. By automatically applying configurations predicted by a model trained on user usage patterns, this process helps optimize software installation and configuration settings to meet a device user's specific needs and desired settings and functionality, as discussed in Aseev ([0005]).
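The prediction flow Aseev describes in [0045] (collected usage data trains a model that maps device characteristics to a suitable configuration) can be illustrated with the following minimal stand-in. A nearest-neighbour lookup substitutes here for the neural network named in the reference, and all feature values, labels, and names are hypothetical.

```python
# Minimal stand-in (pure Python, invented data) for the prediction model
# of Aseev [0045]: usage data maps device characteristics to a suitable
# configuration. Nearest-neighbour replaces the neural network here.
import math

training_set = [  # (cpu_load, ram_use) -> configuration label
    ((0.9, 0.8), "high_performance"),
    ((0.2, 0.3), "power_saving"),
    ((0.5, 0.5), "balanced"),
]

def predict_config(features):
    """Return the configuration whose training sample is closest."""
    return min(training_set, key=lambda s: math.dist(s[0], features))[1]

print(predict_config((0.85, 0.75)))  # 'high_performance'
```

In Aseev's two-stage flow, such a model would first produce a basic configuration at installation and later be retrained as new usage data is collected.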
19. With regard to claim 8, Barsness further teaches:
further comprising:
prioritizing, by the electronic device, the allocation of the at least one resource operating performance configuration to at least one application from the plurality of applications based on at least one of at least one application performance or at least one system state ([0026] ...application programs running on adjacent nodes; [0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; Examiner’s Note: In this example energy consumption is higher priority.); and
allocating, by the electronic device, at least one resource operating performance configuration for at least one application running in the electronic device based on the priority ([0026] ...application programs running on adjacent nodes; [0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness. This means that energy consumption is a higher priority but a moderate amount of additional energy consumption is tolerable. It is also assumed that the historical runtime (428 in FIG. 4) on the multi-node computer system is 50% faster for this application than the historical runtime on the front-end computer system, and the historical power consumption (426 in FIG. 4) for this application is 3% higher on the multi-node computer system than on the front-end computer system. The resource scheduler analyzes this information in the application profile and determines that the power consumption is low (3%) compared to the run time improvement (50%). Thus the resource scheduler would send the application to the multi-node computer system for accelerated execution. 
In contrast, if the power consumption was higher or the strictness was higher, then the additional power consumption would indicate not to accelerate the application to save power by running the application in a front-end computer system; Examiner’s Note: The application gets sent to the multi-node computer system for accelerated execution based on its energy consumption priority).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness where prioritizing, by the electronic device, the allocation of the at least one resource operating performance configuration to at least one application from the plurality of applications based on at least one of at least one application performance or at least one system state; and allocating, by the electronic device, at least one resource operating performance configuration for at least one application running in the electronic device based on the priority. Barsness discusses the system's ability to determine acceleration based on a user's preferred priority ([0040]). This gives the user customizable options for application execution, allowing an application to execute when the user needs or wants it to.
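Priority-driven allocation of the kind discussed for claim 8 can be sketched as follows: applications are ranked by a priority score and available configurations are handed out in that order. The application names, priorities, and configuration labels are all hypothetical and chosen only to echo the user-priority-driven scheduling of Barsness ([0040]).

```python
# Hypothetical sketch: allocate performance configurations to the
# highest-priority applications first. Names and values are invented.

def allocate_by_priority(apps, configs):
    """Assign available configurations to the highest-priority apps first."""
    ranked = sorted(apps, key=lambda a: a["priority"], reverse=True)
    return {app["name"]: cfg for app, cfg in zip(ranked, configs)}

apps = [
    {"name": "video", "priority": 2},
    {"name": "game", "priority": 5},
    {"name": "sync", "priority": 1},
]
print(allocate_by_priority(apps, ["fast_opp", "default_opp", "low_power_opp"]))
# {'game': 'fast_opp', 'video': 'default_opp', 'sync': 'low_power_opp'}
```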
20. With regard to claim 9, Barsness further teaches:
wherein the determining, by the electronic device, of the nature of the task associated with the at least one application running in the electronic device and the parameter associated with the at least one application running in the electronic device comprises:
determining, by the electronic device, at least one of at least one key module to be accelerated, a nature of the at least one key module, or a time-duration for which acceleration is to be done at the at least one key module ([0026] The application 222 is a user software application, process or job that is loaded on the node by the control system to perform a designated task; [0036] The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418. [0040] In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; [0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application.
For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM; Examiner’s Note: Things that can be accelerated include time, temperature, power consumption, and cost. These are the modules. The process or jobs on the node for a designated task are the nature of the task. The applications are already running until the hook, where acceleration is decided.); and
determining, by the electronic device, the nature of the task associated with the at least one application running in the electronic device and the parameter associated with at least one application running in the electronic device based on the determination ([0026] The application 222 is a user software application, process or job that is loaded on the node by the control system to perform a designated task; [0036] The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418. [0040] In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; [0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application.
For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM; Examiner’s Note: Things that can be accelerated include time, temperature, power consumption, and cost. These are the modules. The process or jobs on the node for a designated task are the nature of the task. The parameters are the thresholds.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness where determining, by the electronic device, a nature of a task associated with at least one application among a plurality of applications running in the electronic device and a parameter associated with the at least one application running in the electronic device; learning, by the electronic device, at least one system configuration for the determined nature of the task associated with the at least one application running in the electronic device and the determined parameter associated with the at least one application running in the electronic device; allocating, by the electronic device, the at least one resource operating performance configuration for the at least one application running in the electronic device based on the learning; and accelerating, by the electronic device, execution of the application running in the electronic device based on the at least one allocated resource operating performance configuration. Yalamanchili discusses choosing a hardware accelerator from a hardware configuration store to act as a hardware engine in order to reconfigure and optimize cores. The methods of optimization can be improved hardware, resource utilization, etc. Barsness discusses when to execute the acceleration of an application based on user priorities, such as time, temperature, and power consumption. It would be beneficial to take into consideration parameters such as a system load, temperature of the electronic device, power consumption, or an internal component temperature. These thresholds or limits are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system, as discussed in Barsness ([0036]).
In example embodiments, the determination of whether acceleration is needed is made while the application is running, based on the user's priorities regarding power consumption, runtime, or time of day, as discussed in Barsness ([0040]; [0041]). These are the modules and parameters. Lastly, Yalamanchili teaches the specific nature of the applications, including use for graphics or gaming, while Barsness teaches tasks or jobs with designated purposes. Both teach task-specific applications.
21. With regard to claim 10, Yalamanchili further teaches:
wherein the learning, by the electronic device, of the at least one system configuration comprises:
learning, by the electronic device, the at least one system configuration for a different nature of at least one task associated with at least one application running in the electronic device and different parameters associated with the at least one application running in the electronic device over a period time ([0053] For example, as shown in FIG. 1, the hardware controller 110 may reconfigure the configurable co-processor core(s) 150 to operate as a gaming engine 350 (FIG. 3), a video processing engine 450 (FIG. 4), a low power image processing engine 550 (FIG. 5), a high power image processing engine 650 (FIG. 6), or other like hardware accelerator; [0054] In the auto-reconfiguration mode, the user is not involved in the build selection process. Instead, the multi-core CPU 120 of wireless mobile device 100 learns the usage pattern of the end user and intelligently selects a build file of a corresponding hardware accelerator available from the on-line hardware configuration store. Once the build file is downloaded, the hardware controller 110 may automatically reconfigure the configurable co-processor core(s) 150 for operation based on the use-case(s)/usage-pattern. This auto-reconfiguration can take place in the background to reduce/hide lag/delays associated with the configuration changes.); and
storing, by the electronic device, the learning in a memory ([0040] FIG. 2 is a block diagram of an on-line hardware configuration store 200, according to a further aspect of the disclosure. In this configuration, the user can reconfigure a configurable co-processor core(s) 150 (FIG. 1) to operate as hardware of interest. This can be done through the on-line hardware configuration store 200, in a manner similar to an app-store. Representatively, the on-line hardware configuration store 200 includes build files for reconfiguring the configurable co-processor core(s) 150 to operate as a gaming engine 260, a video processing engine 270, a low power image processing engine 280, a high performance image processing engine 290, or other like hardware accelerator; Examiner’s Note: The configurations are stored in the configuration store.).
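The learn-and-store behavior addressed for claim 10 (configurations learned for different task natures and parameters over time, then persisted) can be illustrated with the following hypothetical sketch. The class, method names, and observation data are invented; the in-memory store merely stands in for the on-line hardware configuration store of Yalamanchili ([0040], [0054]).

```python
# Illustrative learn-and-store loop (invented names): observed
# (task nature, parameter) pairs accumulate over time, and the most
# frequent configuration for each pair is retrievable later.
from collections import Counter, defaultdict

class ConfigLearner:
    def __init__(self):
        self._history = defaultdict(Counter)  # (task, param) -> config counts

    def observe(self, task, param, config):
        """Record one observed configuration choice for a task/parameter pair."""
        self._history[(task, param)][config] += 1

    def stored_config(self, task, param):
        """Return the most frequently observed configuration, if any."""
        counts = self._history.get((task, param))
        return counts.most_common(1)[0][0] if counts else None

learner = ConfigLearner()
learner.observe("gaming", "intensive", "gaming_engine")
learner.observe("gaming", "intensive", "gaming_engine")
learner.observe("gaming", "intensive", "video_engine")
print(learner.stored_config("gaming", "intensive"))  # 'gaming_engine'
```

This mirrors, at a toy scale, learning "the at least one system configuration for a different nature of at least one task ... over a period time" and storing the learning in a memory.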
22. With regard to claim 14, Barsness further teaches:
wherein the allocating, by the electronic device, of the at least one resource operating performance configuration for the at least one application running in the electronic device comprises:
detecting, by the electronic device, a start of at least one event associated with the at least one application ([0026] ...application programs running on adjacent nodes; [0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered; Examiner’s Note: The application beginning execution and running until it encounters hooks for the accelerated portion is an event, generated by an application running on the device, that requires acceleration. Multiple applications run on the system in parallel on adjacent nodes.);
monitoring, by the electronic device, the at least one started event associated with the at least one application ([0039] The heart beat mechanism is used to monitor the progress of the application to update the application profile upon completion of the application. The resource scheduler also monitors the performance of the application and changes the location of acceleration dynamically if needed by moving the application to a more appropriate platform. Upon completion of the application, the gathered historical information is saved in the application profile.);
selecting and applying at least one system configuration from a plurality of system configurations upon at least one started event associated with the at least one application ([0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered.);
monitoring, by the electronic device, a key performance indicator (KPI) for the at least one system configuration from the plurality of system configurations ([0040] The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness.); and
allocating, by the electronic device, the at least one resource operating performance configuration for the at least one application running in the electronic device based on the at least one system configuration and the KPI ([0040] It is also assumed that the historical runtime (428 in FIG. 4) on the multi-node computer system is 50% faster for this application than the historical runtime on the front-end computer system, and the historical power consumption (426 in FIG. 4) for this application is 3% higher on the multi-node computer system than on the front-end computer system. The resource scheduler analyzes this information in the application profile and determines that the power consumption is low (3%) compared to the run time improvement (50%). Thus the resource scheduler would send the application to the multi-node computer system for accelerated execution. In contrast, if the power consumption was higher or the strictness was higher, then the additional power consumption would indicate not to accelerate the application to save power by running the application in a front-end computer system.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness wherein the allocating, by the electronic device, of the at least one resource operating performance configuration for the at least one application running in the electronic device comprises: detecting, by the electronic device, a start of at least one event associated with the at least one application; monitoring, by the electronic device, the at least one started event associated with the at least one application; selecting and applying at least one system configuration from a plurality of system configurations upon at least one started event associated with the at least one application; monitoring, by the electronic device, a key performance indicator (KPI) for the at least one system configuration from the plurality of system configurations; and allocating, by the electronic device, the at least one resource operating performance configuration for the at least one application running in the electronic device based on the at least one system configuration and the KPI. Barsness discusses the system’s ability to determine acceleration based on a user’s preferred priority ([0040]). The user’s preferred priorities act as KPIs to determine whether or not to accelerate a task. This gives the user customizable options for application execution, thereby allowing an application to execute when the user needs or wants it to.
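For illustration only, the acceleration determination Barsness describes in [0040] amounts to weighing the runtime improvement on the accelerated platform against the added power cost under a power-first user priority. The following sketch is the examiner's own construct; the function name, strictness levels, and ratio thresholds are hypothetical and do not appear in the reference.

```python
# Illustrative sketch of the Barsness [0040] decision: accelerate when
# the extra power cost is small relative to the runtime gain, subject
# to the strictness of the power-consumption-versus-runtime priority.

def should_accelerate(runtime_gain_pct, power_cost_pct, strictness="medium"):
    """Return True when the runtime improvement outweighs the extra
    power consumption under the given strictness of a power-first
    user priority."""
    # Hypothetical thresholds: the stricter the power priority, the
    # smaller the tolerated power increase per unit of runtime gain.
    tolerated_ratio = {"low": 0.5, "medium": 0.25, "high": 0.05}[strictness]
    return power_cost_pct <= runtime_gain_pct * tolerated_ratio

# Barsness's example: 50% faster runtime for 3% more power at medium
# strictness, so the application is sent for accelerated execution.
print(should_accelerate(50.0, 3.0))  # True
```

Under this sketch, a higher power cost or a higher strictness flips the decision, matching the contrast drawn at the end of [0040].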
23. Regarding claim 15, it is rejected under the same rationale as claim 6 above.
24. With regard to claim 16, Yalamanchili further teaches:
A method for handling at least one resource operating performance configuration in an electronic device, comprising:
detecting, by the electronic device, a start of at least one event associated with at least one application among a plurality of applications ([0008] In some architectures, special-purpose processors that are often referred to as "accelerators" are implemented to perform certain types of operations. For example, a processor executing a program may offload certain types of operations to an accelerator that is configured to perform those types of operations efficiently. Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU. Hardware accelerators may be designed for computationally intensive software code. Depending upon granularity, hardware acceleration can vary from small functional units to large functional blocks. Examples of such hardware acceleration include blitting acceleration functionality in graphics processing units (GPUs) and instructions for complex operations in CPUs; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another. To prevent large amounts of any associated reconfiguration overhead, the algorithm can adjust itself to reconfigure after a duration of time or usage, which may be predetermined. Reconfiguration decisions can be dependent on the type of configurable blocks and recommendations on reconfiguration; Examiner’s Note: Although “events” are not specifically mentioned, there are instances of intensive computations which require acceleration. These instances are similar to the events. The algorithm is able to adjust itself to reconfigure after a duration of time or usage.);
detecting, by the electronic device, at least one parameter as an input to set an optimal system configuration for accelerating execution of each event from the at least one event in the electronic device, wherein the at least one parameter comprises at least one of a system load, a temperature of the electronic device, a power consumption, or an internal component temperature (Fig. 2, 200 Hardware Configuration Store; [0008] In some architectures, special-purpose processors that are often referred to as "accelerators" are implemented to perform certain types of operations. For example, a processor executing a program may offload certain types of operations to an accelerator that is configured to perform those types of operations efficiently. Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU. Hardware accelerators may be designed for computationally intensive software code. Depending upon granularity, hardware acceleration can vary from small functional units to large functional blocks. Examples of such hardware acceleration include blitting acceleration functionality in graphics processing units (GPUs) and instructions for complex operations in CPUs; [0034] The DSP cores 118A and 118B, and the processor cores 120A and 120B of the multi-core CPU 120 support various functions such as video, audio, graphics, gaming, and the like; [0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another; Examiner’s Note: The nature of the task can be graphics or gaming tasks. These can be seen in the hardware configuration store as different engines. The parameter can be whether or not the application is intense, and acceleration is done based on this.);
acquiring, by the electronic device, a plurality of previously saved system configurations based on at least one detected parameter ([0015] According to another aspect of the present disclosure, a method for reconfiguration of co-processor cores for general purpose processors is described. The method includes selecting a hardware accelerator from an on-line hardware configuration store; Examiner’s Note: The configurations in the on-line hardware configuration store are analogous with saved system configurations.);
identifying and applying, by the electronic device, an optimal configuration value from the plurality of previously saved system configurations to allocate of at least one optimal resource operating performance configuration for at least one application running in the electronic device ([0055] In particular, the auto-reconfiguration mode may adaptively adjust/change the hardware in the device to work better (or even optimally) for a user's changing usage pattern. Additionally, the auto-reconfiguration mode makes continuous/improved use of the reconfigurable fabric in the wireless mobile device 100. Because, the auto-reconfiguration mode detects the hardware accelerator that is specified for efficient operation of the wireless mobile device and configures the reconfigurable fabric in advance, configuration latency associated with reconfiguring medium or large cores is hidden from the user. The auto-reconfiguration mode may provide improved power, performance and resource utilization by learning a user's usage pattern. The auto-reconfiguration mode may dynamically improve (or even optimize) the device for each individual user and for changing usage patterns of each user; Examiner’s Note: The auto-reconfiguration mode can provide improved resource utilization.); and
accelerating, by the electronic device, execution of the at least one event associated with the at least one application running in the electronic device based on the at least one optimal resource operating performance configuration ([0008] Such hardware acceleration employs hardware to perform some function faster than is possible in software running on a normal (general-purpose) CPU.),
Yalamanchili fails to explicitly teach a plurality of applications and wherein the at least one parameter comprises at least one of a system load, a temperature of the electronic device, a power consumption, or an internal component temperature.
However, in analogous art, Barsness teaches:
a plurality of applications ([0026] ...application programs running on adjacent nodes.);
wherein the at least one parameter comprises at least one of a system load, a temperature of the electronic device, a power consumption, or an internal component temperature ([0036] The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418. 0040] In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered. The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application. For this example, it is assumed that there is a power consumption versus runtime (416 in FIG. 4) user priority for this application. For this example, we assume this user priority sets up the energy consumption with a higher priority than runtime with a medium strictness; [0041] Another example of the resource scheduler will now be described. In this case, it is also assumed that the application runs until hooks for the accelerated portion are encountered as described above and then accesses the user priorities and application characteristics to determine whether to accelerate the application. For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM; Examiner’s Note: The parameters are the user specific priorities set for factors such as power consumption, runtime, or time of day. 
The applications are already running until the hook, where acceleration is decided.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness to teach a plurality of applications and wherein the at least one parameter comprises at least one of a system load, a temperature of the electronic device, a power consumption, or an internal component temperature. Yalamanchili discusses choosing a hardware accelerator from a hardware configuration store to act as a hardware engine in order to reconfigure and optimize cores. The methods of optimization can include improved hardware and resource utilization. Barsness discusses when to execute the acceleration of an application based on user priorities, such as time, temperature, and power consumption. It would be beneficial to take into consideration parameters of a system load, temperature of the electronic device, power consumption, or an internal component temperature. These thresholds or limits are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system, as discussed in Barsness ([0036]). In example embodiments, the determination of whether acceleration is needed is made as the application is running. The determination is based on the user’s priorities regarding power consumption, runtime, or time of day, as discussed in Barsness ([0040]; [0041]).
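For illustration only, the auto-reconfiguration mode Yalamanchili describes ([0049], [0054]) profiles usage in the background and, after a duration of time or usage, selects the accelerator build matching the dominant usage pattern. The sketch below is the examiner's own construct; every identifier, including the build-store mapping, is hypothetical and does not appear in the reference.

```python
# Illustrative sketch of Yalamanchili's auto-reconfiguration mode:
# learn the usage pattern and pick a build file from the on-line
# hardware configuration store for the dominant task type.

from collections import Counter

# Hypothetical mapping from task type to a build file in the on-line
# hardware configuration store (cf. the engines of FIG. 2).
BUILD_STORE = {
    "gaming": "gaming_engine.build",
    "video": "video_processing_engine.build",
    "imaging": "image_processing_engine.build",
}

def select_build(usage_log, min_samples=10):
    """Return the build for the dominant task type, or None when too
    little usage has been profiled to justify reconfiguration overhead
    (cf. [0049], reconfiguring only after a duration of time or usage)."""
    if len(usage_log) < min_samples:
        return None
    dominant, _ = Counter(usage_log).most_common(1)[0]
    return BUILD_STORE.get(dominant)

print(select_build(["gaming"] * 8 + ["video"] * 4))  # gaming_engine.build
```

The `min_samples` guard mirrors the reference's concern with avoiding reconfiguration overhead from switching builds too frequently.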
Although Yalamanchili teaches automated reconfiguration of the system configuration and selecting a hardware accelerator according to user usage patterns without needing a user to select a configuration, both Yalamanchili and Barsness fail to explicitly teach that the modifying is done wherein the optimal configuration value is obtained through a neural network model trained to obtain information regarding a system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications.
However, in analogous art, Aseev teaches:
wherein the optimal configuration value is obtained through a neural network model trained to obtain information regarding a system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications ([0045] The device and user information received by data collection module 122 is then stored as files 116 in device information database 114 where it can be classified based on different criteria at step 310. Next, the collected device and user data can be accessed by the analysis module 124, in which the further data processing can be performed at step 315. Based on the analysis of the collected device and user data, the analysis module 124 can automatically build one or more usage patterns 128 or a set of different patterns. For example, in an exemplary aspect, the analysis module 124 can develop a prediction model using the information collected by the data collection module 122 discussed above. In this aspect, the prediction model is trained by a specific of machine learning algorithm (e.g., random forest, SVM, neural network), where the training set is based on the collected information. In one aspect, if a new user is identified, the device information and user information of the new user will be provided as an input to the machine learning algorithm, which will predict the suitable software or system configurations for the user based on the prediction model. In one aspect, there are two stages to predict the configuration. In the first stage, basic configuration is created during the installation on the new device based on the usage patterns and characteristics that are immediately available such as CPU/RAM usage and amount, network throughput and the like. 
In the second stage, the configuration information is updated (i.e., tuned) according to a predetermined frequency based on new usage data collected and then the machine learning prediction model is retrained based on the new information to predict suitable configurations.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili and Barsness with the teachings of Aseev wherein the optimal configuration value is obtained through a neural network model trained to obtain information regarding a system configuration optimized for acceleration for each of a plurality of events generated by the plurality of applications. Similarly to Yalamanchili and Barsness, Aseev teaches automatically configuring and adjusting computer systems based on data collection and analysis to create new usage patterns. The method then compares device and user information of the computing device with the usage patterns associated with the existing devices to identify an optimal configuration for the computing device. The optimal configuration is used to generate installation instructions and configurations. Finally, the system settings of the computing device are automatically configured based on the instructions and configurations (Abstract). The prediction model is trained by a machine learning algorithm, which can be a neural network model. By automatically setting configurations based on training a neural network model on user usage patterns, this process helps optimize software installation and configuration settings to meet a device user's specific needs and desired settings and functionality, as discussed in Aseev ([0005]).
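For illustration only, Aseev's two-stage prediction ([0045]) trains a model on collected usage data, predicts a suitable configuration for a new user from immediately available characteristics, and later retrains on newly collected data. Aseev names neural networks among the possible algorithms; a dependency-free nearest-neighbor lookup stands in for the model in this sketch, and every identifier is hypothetical.

```python
# Illustrative sketch of Aseev's two-stage configuration prediction:
# stage 1 predicts from immediately available usage features, stage 2
# retrains on new usage data to tune the prediction.

def train(samples):
    """'Train' by memorising ((cpu_usage, ram_usage) -> config) pairs;
    prediction returns the config of the nearest training sample."""
    def predict(features):
        nearest = min(
            samples,
            key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], features)),
        )
        return nearest[1]
    return predict

# Stage 1: basic configuration from immediately available data such as
# CPU/RAM usage (cf. [0045]).
model = train([((0.9, 0.8), "high_performance"), ((0.2, 0.3), "low_power")])
print(model((0.85, 0.9)))  # high_performance

# Stage 2: retrain with newly collected usage data to refine the
# predicted configurations.
model = train([((0.9, 0.8), "high_performance"),
               ((0.2, 0.3), "low_power"),
               ((0.5, 0.5), "balanced")])
print(model((0.55, 0.45)))  # balanced
```

The retraining step corresponds to the reference's periodic update of the configuration information according to a predetermined frequency.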
25. With regard to claim 17, Yalamanchili further teaches:
further comprising:
detecting, by the electronic device, at least one subsequent event from at least one application, wherein the at least one subsequent event corresponds to a logical intermediate stage of an operation of the at least one application ([0049] In this configuration, a CPU can profile a user's usage in the background and auto-reconfigure the device, from the list of available builds, based on a determined usage pattern of the user. The reconfiguration may be done each time a user switches from one intensive task to another. To prevent large amounts of any associated reconfiguration overhead, the algorithm can adjust itself to reconfigure after a duration of time or usage, which may be predetermined. Reconfiguration decisions can be dependent on the type of configurable blocks and recommendations on reconfiguration.; Examiner’s Note: The intermediate stage can be when the user switches from one intensive task to another. The algorithm can reconfigure after a duration of time or usage, which can also be an intermediate stage of operation.); and
identifying and applying, by the electronic device, the optimal configuration value from the plurality of previously saved system configurations to allocate of at least one optimal resource operating performance configuration for at least one application running in the electronic device based on the at least one detected subsequent event ([0050] FIG. 8 is a process flow diagram showing a method 800 for automated reconfiguration of a configurable core according to an illustrative aspect of the present disclosure. At block 810, a user's usage of a wireless mobile device is profiled to determine a usage pattern of the user. Once the usage pattern of the user is determined, at block 812, it is determined whether to reconfigure a configurable core of the wireless mobile device; [0051] For example, as shown in FIG. 1, a multi-core CPU 120 may periodically collect information on the type of hardware accelerators available from the on-line hardware configuration store 200; Examiner’s Note: The configuration store has previously saved system configurations. Based on user’s usage pattern, automatic reconfiguration and acceleration is completed.).
26. Regarding claim 18, it is rejected under the same rationale as claim 8 above.
27. Regarding claim 19, it is rejected under the same rationale as claim 8 above.
28. Claims 11-13 are rejected under 35 USC 103 as being unpatentable over Yalamanchili US 20140281472 A1; Barsness et al. US 20130159745 A1; and Aseev et al. US 20180232245 A1, as applied in claim 1, in further view of Subramanian et al. US 20200104184 A1.
29. With regard to claim 11, Yalamanchili and Aseev teach the method of claim 7 but fail to explicitly teach wherein the learning, by the electronic device, of the at least one system configuration comprises: detecting, by the electronic device, a start of at least one event associated with the at least one application; sharing, by the electronic device, information associated with the at least one event to a processor; evaluating, by the electronic device, optimal setting against various stages of the at least one event and a system load associated with the at least one application; storing, by the electronic device, timestamp information, key performance indicator (KPI) information against various stages of the at least one event and tuneable decisions by the processor against the at least one event; detecting, by the electronic device, an end of at least one event associated with the at least one application; sending, by the electronic device, KPI versus timestamp information corresponding to at least one event to a memory; evaluating, by the electronic device, a performance of the KPI corresponding to at least one event to determine whether the KPI is met with a predefined value; performing one of computing a negative reward value upon determining the KPI is not met with the predefined value or computing a positive reward value upon determining the KPI is met with the predefined value; sharing one of the positive reward value or the negative reward value to the processor; and storing learning of tuneable decision and one of the positive reward value or the negative reward value corresponding to a system load context information associated with the at least one application in the memory.
However, Barsness further teaches:
wherein the learning, by the electronic device, of the at least one system configuration comprises:
detecting, by the electronic device, a start of at least one event associated with the at least one application ([0026] ...application programs running on adjacent nodes; [0040] An example of the resource scheduler determining where to accelerate a section of an application based on an application profile will now be described. In this case, it is assumed that the application begins execution on a front-end system and runs until hooks for the accelerated portion are encountered; Examiner’s Note: The application beginning execution and running until it encounters hooks for the accelerated portion is an event, generated by an application running on the device, that requires acceleration. Multiple applications run on the system in parallel on adjacent nodes.);
sharing, by the electronic device, information associated with the at least one event to a processor ([0038] A heart beat mechanism 144 (FIG. 1) is used to gather historical information stored in the application profile. The heart beat mechanism 144 preferably would run continuously to gather the historical network utilization, historical power consumption and historical runtime of each application running on each platform and then store this information in the application profile 145.);
evaluating, by the electronic device, optimal setting against various stages of the at least one event and a system load associated with the at least one application ([0040] The resource scheduler accesses the user priorities and application characteristics for this application in the application profile to determine whether to accelerate the application.);
storing, by the electronic device, timestamp information, key performance indicator (KPI) information against various stages of the at least one event and tuneable decisions by the processor against the at least one event ([0036] FIG. 4 shows a block diagram that represents the type of data that is stored in an application profile to facilitate optimization of efficiency and power consumption in a hybrid computer system. The application profile 145 preferably includes user priorities 410 and application characteristics 420. The user priorities are thresholds or limits that are set by the user or a system administrator to control the resource scheduler's actions to optimize power and performance of the hybrid system. In the illustrated example, the user priorities 410 include a time of day 412, system temperature 414, power consumption versus run time 416 and cost to run 418. The time of day 412, gives preferences for optimization depending on the time of day to allow the user to control utilization of resources at peak times; [0037] There may be application characteristics 420 stored for multiple applications on multiple platforms; [0038] A heart beat mechanism 144 (FIG. 1) is used to gather historical information stored in the application profile. The heart beat mechanism 144 preferably would run continuously to gather the historical network utilization, historical power consumption and historical runtime of each application running on each platform and then store this information in the application profile 145. The resource scheduler can determine where to execute the application based on the application profile when a portion of the application is to be accelerated or prior to beginning the application; [0040] It is also assumed that the historical runtime (428 in FIG. 4) on the multi-node computer system is 50% faster for this application than the historical runtime on the front-end computer system, and the historical power consumption (426 in FIG. 
4) for this application is 3% higher on the multi-node computer system than on the front-end computer system. The resource scheduler analyzes this information in the application profile and determines that the power consumption is low (3%) compared to the run time improvement (50%). Thus the resource scheduler would send the application to the multi-node computer system for accelerated execution; [0041] For this example, it is assumed that the user priorities include a time of day priority that the application should run on the front-end computer system and not be accelerated on the multi-node computer system where the historical run time on the multi-node computer system is greater than 1 minute and the current time of day is between the hours of 8:00 AM and 5:00 PM. It is also assumed that the historical runtime for this application on the multi-node computer system is greater than 1 minute and the time of day is 11:00 AM; Examiner’s Note: The percentages are KPI. The thresholds/limits are tunable decisions. Time information is kept.);
detecting, by the electronic device, an end of at least one event associated with the at least one application ([0046] During the execution of the application, gather historical information about the power and efficiency of the execution and upon completion save the gathered historical information in the application profile (step 660). The method is then done.);
sending, by the electronic device, KPI versus timestamp information corresponding to at least one event to a memory ([0046] During the execution of the application, gather historical information about the power and efficiency of the execution and upon completion save the gathered historical information in the application profile (step 660). The method is then done.);
evaluating, by the electronic device, a performance of the KPI corresponding to at least one event to determine whether the KPI is met with a predefined value ([0043] In another example, it is assumed that the user priorities include a cost to run priority (418 in FIG. 4) that the application should be accelerated on a multi-node computer system when the cost is within a threshold. The cost to run may be determined by a combination of the historical power consumption 426, historical runtime 428 and could include other criteria such as the current time of day. The resource scheduler analyzes this information in the application profile and determines the current cost to run the application is above the threshold in the cost to run user priority 418. Thus the resource scheduler would not send the application to the multi-node computer system for accelerated execution.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili with the teachings of Barsness wherein the learning, by the electronic device, of the at least one system configuration comprises: detecting, by the electronic device, a start of at least one event associated with the at least one application; sharing, by the electronic device, information associated with the at least one event to a processor; evaluating, by the electronic device, optimal setting against various stages of the at least one event and a system load associated with the at least one application; storing, by the electronic device, timestamp information, key performance indicator (KPI) information against various stages of the at least one event and tuneable decisions by the processor against the at least one event; detecting, by the electronic device, an end of at least one event associated with the at least one application; sending, by the electronic device, KPI versus timestamp information corresponding to at least one event to a memory; evaluating, by the electronic device, a performance of the KPI corresponding to at least one event to determine whether the KPI is met with a predefined value. Barsness discusses the system’s ability to determine acceleration based on a user’s preferred priority ([0040]). The user’s preferred priorities act as KPIs to determine whether or not to accelerate a task. This gives the user customizable options for application execution, thereby allowing an application to execute when the user needs or wants it to.
Barsness fails to explicitly teach performing one of computing a negative reward value upon determining the KPI is not met with the predefined value or computing a positive reward value upon determining the KPI is met with the predefined value; sharing one of the positive reward value or the negative reward value to the processor; and storing learning of tuneable decision and one of the positive reward value or the negative reward value corresponding to a system load context information associated with the at least one application in the memory.
However, in analogous art, Subramanian teaches:
performing one of computing a negative reward value upon determining the KPI is not met with the predefined value or computing a positive reward value upon determining the KPI is met with the predefined value ([0012] For example, boundedness for each category of computing resource can be represented as a percentage or boundedness can be represented as a score. The boundedness and key performance indicator (KPI) of each workload can be calculated by the accelerator or other device. For example, the KPI can be considered to be end-to-end latency of a workload (e.g., time from a workload request to completion of the workload by a returned response or result) but can vary depending on the user's requirements; [0013] In some examples, the pod resource manager can leverage AI or machine learning (ML) to determine a computing resource allocation to allocate for a workload request. A reinforcement learning scheme can be used to generate resource allocation suggestions based on positive or negative rewards from prior resource allocation suggestions. When a resource allocation suggestion is applied, its associated KPI is compared to the previous workload run's performance (e.g., KPI) and a reward is calculated and accumulated in a workload table; [0030] In some examples, pod manager 510 can specify target KPIs or other requirements for the workload request that accelerator 520 will use in place of or in addition to requirements specified by its SLA. Such parameters can be transferred to accelerator 520 as part of the request for resource configuration. [0066] A reward value can be positive for satisfaction of the SLA requirement(s) and one or more of: lower data center utilization by the workload (e.g., higher capacity to handle other workloads), satisfaction of performance requirements, steadiness or reduction in end-to-end latency, and/or steadiness or reduction in boundedness. 
A reward value can be lower (or negative) for one or more of: failure to meet requirements of the SLA, failure to meet performance requirement(s), higher data center utilization by the workload (e.g., higher capacity to handle other workloads), an increase in end-to-end latency, and/or an increase in boundedness.);
sharing one of the positive reward value or the negative reward value to the processor ([0035] Accelerator 520 can receive resource configuration requests from pod manager 510 and provide the parameters (e.g., workload identifier, SLA requirement(s), required resource configuration or KPI, and so forth) of the request to resource determination module 522.); and
storing learning of tuneable decision and one of the positive reward value or the negative reward value corresponding to a system load context information associated with the at least one application in the memory ([0013] When a resource allocation suggestion is applied, its associated KPI is compared to the previous workload run's performance (e.g., KPI) and a reward is calculated and accumulated in a workload table; [0025] Accelerator 220 provides to AI model 222 the workload parameters from pod manager 210 and also information related to the workload from workload table 224. Workload table 224 keeps track of previously performed workloads and their characteristics such as one or more of: boundedness (e.g., utilization of one or more of: processor, memory, network, storage, or cache), applied resource allocations, telemetry data, or workload performance characteristic(s).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Barsness with the teachings of Subramanian wherein performing one of computing a negative reward value upon determining the KPI is not met with the predefined value or computing a positive reward value upon determining the KPI is met with the predefined value; sharing one of the positive reward value or the negative reward value to the processor; and storing learning of tuneable decision and one of the positive reward value or the negative reward value corresponding to a system load context information associated with the at least one application in the memory. Yalamanchili, Barsness, and Aseev teach automated reconfiguration of the system configuration and selection of a hardware accelerator according to user usage patterns, without requiring the user to select a configuration; the reconfiguration is performed using a neural network model. Similarly, Subramanian teaches using a neural network to accelerate determinations of computing resource allocations based on telemetry data in order to suggest a resource allocation (Abstract). A reinforcement learning scheme is used to generate resource allocations based on positive or negative rewards. Positive and negative rewards allow the ML model to learn whether performance requirements are satisfied, as discussed in Subramanian ([0066]). Using KPIs together with positive and negative rewards helps the ML model produce results that best align with system and user needs.
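For context, the reward accumulation scheme that Subramanian describes ([0013], [0066]) can be sketched as follows. This is an illustrative sketch only; all identifiers are hypothetical and do not appear in the cited reference.

```python
# Illustrative sketch of Subramanian's reinforcement-style reward scheme
# ([0013], [0066]). All names are hypothetical, not from the reference.

def compute_reward(kpi: float, target_kpi: float) -> int:
    """Return a positive reward when the measured KPI (e.g. end-to-end
    latency) meets the predefined target value, negative otherwise."""
    return 1 if kpi <= target_kpi else -1

def record_outcome(workload_table: dict, workload_id: str,
                   kpi: float, target_kpi: float) -> int:
    """Compute the reward for a workload run and accumulate it in a
    workload table, as described in Subramanian [0013]."""
    reward = compute_reward(kpi, target_kpi)
    entry = workload_table.setdefault(workload_id, {"kpi": kpi, "reward": 0})
    entry["reward"] += reward   # rewards accumulate across runs
    entry["kpi"] = kpi          # record the latest observed KPI
    return reward
```

Under this sketch, a run meeting the KPI target increments the accumulated reward for that workload, and a run missing it decrements the reward, so the stored table reflects how well past resource allocation suggestions performed.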
30. With regard to claim 12, Subramanian further teaches:
wherein the positive reward value and the negative reward value correspond to a system load context information associated with the at least one application in the memory ([0013] A reinforcement learning scheme can be used to generate resource allocation suggestions based on positive or negative rewards from prior resource allocation suggestions. When a resource allocation suggestion is applied, its associated KPI is compared to the previous workload run's performance (e.g., KPI) and a reward is calculated and accumulated in a workload table.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili, Barsness, and Aseev with the teachings of Subramanian wherein the positive reward value and the negative reward value correspond to a system load context information associated with the at least one application in the memory. As stated above, Subramanian teaches using a neural network to accelerate determinations of computing resource allocations based on telemetry data in order to suggest a resource allocation. By associating the positive and negative reward values with a system load, the appropriate resource allocation can be made for a workload request (Abstract; [0013]).
31. With regard to claim 13, Subramanian further teaches:
wherein the reward value is negative if completion time has become worse compared to previous experience ([0013] A reinforcement learning scheme can be used to generate resource allocation suggestions based on positive or negative rewards from prior resource allocation suggestions; [0066] A reward value can be lower (or negative) for one or more of: failure to meet requirements of the SLA, failure to meet performance requirement(s), higher data center utilization by the workload (e.g., higher capacity to handle other workloads), an increase in end-to-end latency, and/or an increase in boundedness.), and
wherein the reward value is positive if completion time has become improved or remained same compared to previous experience ([0013] A reinforcement learning scheme can be used to generate resource allocation suggestions based on positive or negative rewards from prior resource allocation suggestions; [0066] A reward value can be positive for satisfaction of the SLA requirement(s) and one or more of: lower data center utilization by the workload (e.g., higher capacity to handle other workloads), satisfaction of performance requirements, steadiness or reduction in end-to-end latency, and/or steadiness or reduction in boundedness.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yalamanchili, Barsness, and Aseev with the teachings of Subramanian wherein the reward value is negative if completion time has become worse compared to previous experience, and wherein the reward value is positive if completion time has improved or remained the same compared to previous experience. As stated above, Subramanian teaches using a neural network to accelerate determinations of computing resource allocations based on telemetry data in order to suggest a resource allocation. By associating the positive and negative reward values with a system load, the appropriate resource allocation can be made for a workload request (Abstract; [0013]). A reinforcement learning scheme is used to generate resource allocations based on positive or negative rewards. Positive and negative rewards allow the ML model to learn whether performance requirements are satisfied, as discussed in Subramanian ([0066]). Using KPIs together with positive and negative rewards helps the ML model produce results that best align with system and user needs.
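For context, the claim 13 reward sign convention (negative when completion time worsens versus the previous run, positive when it improves or remains the same, consistent with Subramanian [0013] and [0066]) can be sketched as follows. The function name is hypothetical and does not appear in the cited reference.

```python
# Illustrative sketch of the claim 13 reward sign convention; the
# identifier is hypothetical, not from the cited reference.

def completion_time_reward(current: float, previous: float) -> int:
    """Negative reward when the current completion time is worse than
    the previous run; positive when it improved or stayed the same."""
    return -1 if current > previous else 1
```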
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN-AN N NGUYEN whose telephone number is (571)272-6147. The examiner can normally be reached Monday-Friday 8:00-5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AIMEE LI can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN-AN NGOC NGUYEN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195