Prosecution Insights
Last updated: April 19, 2026
Application No. 19/033,396

PROCESSING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA: §103, §112
Filed
Jan 21, 2025
Examiner
GOLDSCHMIDT, CRAIG S
Art Unit
2132
Tech Center
2100 — Computer Architecture & Software
Assignee
Smarter Silicon (Shanghai) Technologies Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 293 granted / 401 resolved; +18.1% vs TC avg)
Interview Lift: +32.1% (strong; resolved cases with vs. without interview)
Typical Timeline: 2y 10m avg prosecution; 21 currently pending
Career History: 422 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§103: 46.4% (+6.4% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Based on career data from 401 resolved cases; Tech Center averages are estimates.
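As a quick arithmetic check, the headline allow rate follows directly from the stated career counts. A minimal sketch (the Tech Center average is back-computed from the displayed +18.1% delta, so it is an estimate, not sourced data):

```python
# Career allow rate from the stated counts: 293 granted of 401 resolved.
granted, resolved = 293, 401
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # 73%

# Implied Tech Center average, back-computed from the "+18.1% vs TC avg"
# delta shown above (an estimate, not sourced data).
tc_avg = allow_rate - 0.181
print(f"Implied TC average: {tc_avg:.1%}")      # 55.0%
```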

Office Action

DETAILED ACTION

This action responds to Application No. 19/033,396, filed 01/21/2025. Claims 1-18 are presented for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. The limitation “before based on the identifier of the first task, obtaining the target cache parameters of the first task […] wherein a cache unit corresponding to a first core includes cache data of the first task” (lines 1-6) does not provide sufficient detail to allow one having ordinary skill in the art to practice the specific timing of the invention. More specifically, it is unclear how one having ordinary skill in the art would know which cache units include data associated with a first task “before […] obtaining the target cache parameters of the first task”, since the target cache parameters are what “characterizes a quantity of cached items of the first task in a cache unit”. It is noted that corresponding claims 8 and 14 are otherwise the same, but omit this timing language.

Claim 5 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. The limitation “before selecting the core with the target cache parameter corresponding to the largest quantity of cached data […] the core with the target cache parameter corresponding to the largest quantity of cached data items” (lines 1-5) does not provide sufficient detail to allow one having ordinary skill in the art to practice the specific timing of the invention. More specifically, it is unclear how one having ordinary skill in the art would know which core has the largest quantity of cached items “before selecting the core with the target cache parameter corresponding to the largest quantity of the cached items”. It is noted that corresponding claims 11 and 17 are otherwise the same, but omit this timing language.

Claims 3-5 are rejected as being dependent upon claim 2 above.

Claims 3, 9, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The limitation “in response to that all of the multiple first cores of the processor are not in an idle state” (e.g. claim 3, line 2) directly conflicts with the limitation “a plurality of idle state first cores from multiple first cores” in parent claims 2, 8, and 14 (e.g. claim 2, line 4). If a plurality of idle cores have been selected from the multiple first cores, then it is not possible that none of the multiple first cores are in an idle state.

Claims 5, 11, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention, as follows:

Language “obtaining a second performance parameter of the core with the target cache parameter corresponding to the largest quantity of the cached data items and a second performance parameter of each second core in the processor, wherein a first performance parameter of each second core satisfies the performance requirement of the first task; and” (e.g. claim 5, lines 4-7). This limitation is indefinite, for 2 reasons.
First, the limitation “each second core in the processor” provides ambiguous antecedent basis, as there is no prior mention of second cores, and it is unclear whether this was intended to serve as initial antecedent basis. Second, Applicant discloses “obtaining […] a second performance parameter of each second core”, but then discloses that “a first performance parameter of each second core satisfies the performance requirement”. It is noted that respective parent claims 3, 9, and 15 disclose first performance parameters, but as applied to first cores. Accordingly, it is unclear whether the first performance parameter of each second core represents a typographical error, and should be “second performance parameter”, or whether it is intended to refer to the first performance parameter of claims 3, 9, and 15, or if it is a separate first performance parameter applied to a different set of cores. As Examiner is unable to determine the intended meaning, the limitation is indefinite.

Language “in response to that performance characterized by the second performance parameter of the core with the target cache parameter corresponding to the largest quantity of the cached data items is better than performance characterized by the second performance parameter of each second core, configuring the core with the target cache parameter corresponding to the largest quantity of the cached data items as the target core” (e.g. claim 5, lines 8-12). This limitation is indefinite, as it appears to make the step of selecting the core with the target cache parameter corresponding to a largest quantity of the cached data, which is positively recited in parent claims 4, 10, and 16, further conditional upon having better performance than other cores. A dependent claim may not make conditional a limitation that was not conditional in the parent claim, as this would imply that the condition could fail, and thus the core having the largest number of cached items might not be selected, which would conflict with the unambiguous selection of the core in the parent claim.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Language “a detection module, configure to detect a target cache region updated in the cache unit corresponding to each core of the processor, initiate an identifier for a target task updated, configure an identifier for a task belonging to the target cache region updated as the identifier for the target task, and record a cache parameter corresponding to each task in each core, wherein the cache parameter characterizes a quantity of cached data items of each task in the cache unit corresponding to each core” (lines 5-10). This limitation is indefinite, for 2 reasons. First, it is non-idiomatic English. In particular, the terms “a target cache region updated” and “a target task updated” are ambiguous limitations. Second, this limitation contains several terms whose antecedent basis is ambiguous as to whether they are singular or plural. For example, it is ambiguous whether there is a single target cache region updated for all cores of the processor, or whether there is a separate respective target cache region updated for each respective core. Similarly, it is unclear whether there is a single cache parameter corresponding to each task in each core, or whether there is a respective cache parameter for each task and for each core, or if there is a respective cache parameter for each unique respective combination of task and core.
Claims 4-5, 8-12, 10-11, and 16-17 are rejected as being dependent upon one of claims 3, 7, 9, and 15 above, respectively. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al (US 2015/0324234) in view of Soundararajan et al (US 2016/0179581 A1).

Re claim 1, Chang discloses the following:

A processing method, comprising: obtaining an identifier for a first task (¶ 7). The task scheduler identifies a task as part of a thread group (obtaining an identifier);

based on the identifier for the first task, obtaining target cache parameters of the first task corresponding to all cores of a processor, wherein a target cache parameter characterizes […] cached data items of the first task in a cache unit corresponding to a core; and (¶ 26). A first task belonging to a task group (identifier) contains information about specific data and/or specific memory addresses associated with that task. In the example given, at least one piece of said data and/or specific memory address has been accessed by another task of the same task group, and is thus cached data item[s]; accordingly, these parameters are “cache parameters” as they relate to data that is more likely to be cached, as it is being used by other tasks in the group. Furthermore, grouping tasks in this way increases a cache hit rate, which implies that data items will already be cached due to the other tasks in the group;

based on the target cache parameters corresponding to all cores, selecting a target core from all cores to process the first task (¶ 26). Based on the specified data and/or specific memory addresses (target cache parameters), a target core is selected for executing the first task.

As noted above, Chang discloses grouping tasks onto cores sharing a common L2 cache such that the cache hit rate is increased, wherein in the example given, at least one piece of the specified data/memory address has been cached by another task in the group, and can thus be fetched from the cache by the first task without retrieving it from memory (¶ 26). However, while this implies that at least some of the specified data/memory addresses are cached, Chang does not explicitly disclose that the cache parameter “characterizes a quantity of cached data items of the first task”. Accordingly, Examiner has provided Soundararajan.

Soundararajan discloses that a target cache parameter characterizes a quantity of cached items from a first task in a cache unit corresponding to a core (¶ 23 and 67-69). The resource management module assigns tasks to computing nodes (cores) based on a number of blocks of data (quantity of cached items) associated with a task that is cached at the node (in a cache unit corresponding to a core).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the cache coherence-aware task assignment of Chang to base task assignment on amount of data cached, as in Soundararajan, because Soundararajan suggests that utilizing content-aware task assignment would improve performance of applications by allowing them to take advantage of cached data having higher transfer rates (¶ 31).

Re claim 2, Chang and Soundararajan disclose the method of claim 1, and Chang further discloses that before based on the identifier for the first task, obtaining the target cache parameters of the first task corresponding to all cores of the processor, further including: selecting a plurality of idle state first cores from multiple first cores of the processor as a plurality of cores, wherein a cache unit corresponding to a first core includes cache data of the first task (¶ 38-41). The timing of this limitation is insufficiently enabled, as noted above. Accordingly, Examiner interprets this limitation to not be limited to the specific timing. Chang discloses that in conjunction with the task grouping to increase the odds of data being cached, cores may be selected from a group of cores based on there being one or more idle cores (or most idle, in the case of no idle cores).

Re claim 3, Chang and Soundararajan disclose the method of claim 2, and Chang further discloses that in response to that all of the multiple first cores of the processor are not in an idle state, selecting a plurality of first cores, with first performance parameters satisfying a performance requirement of the first task, from the multiple first cores as the plurality of cores (¶ 38-41). This limitation is indefinite, as noted above. Examiner interprets it to mean selecting a plurality of cores satisfying a performance requirement.
The selected core may be selected from the group of one or more idle, or one or more lightest-loaded, cores.

Re claim 4, Chang and Soundararajan disclose the method of claim 3, and Soundararajan further discloses that based on the target cache parameters corresponding to all cores, selecting the target core from all cores to process the first task includes: based on the target cache parameters of the first task corresponding to all cores, selecting a core with a target cache parameter corresponding to a largest quantity of the cached data items from all cores as the target core (¶ 67-69). The resource management module may select a node (core) with the largest number of cached data blocks (largest quantity of cached data items) as the target node (core). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Chang and Soundararajan, for the reasons noted in claim 1 above.

Re claim 5, Chang and Soundararajan disclose the method of claim 4, and Soundararajan further discloses the following:

before selecting the core with the target cache parameter corresponding to the largest quantity of the cached data items from all cores as the target core, further including: (¶ 67-69). This limitation is insufficiently enabled, as noted above. Accordingly, Examiner interprets this limitation as not being limited to the specific timing. Soundararajan discloses performing a node (core) selection using multiple factors, including the quantity of cached items as well as performance;

obtaining a second performance parameter of the core with the target cache parameter corresponding to the largest quantity of the cached data items and a second performance parameter of each second core in the processor, wherein a first performance parameter of each second core satisfies the performance requirement of the first task; and (¶ 67-69). This limitation is indefinite, as noted above. Examiner interprets it to mean obtaining a second parameter in addition to the quantity of cached items, wherein the second parameter is a performance parameter. The resource management module assigns tasks based on the largest quantity of cached items, as well as additional parameters (second performance parameter) of each node (core), such as available processing cycles, available memory, etc.;

in response to that performance characterized by the second performance parameter of the core with the target cache parameter corresponding to the largest quantity of the cached data items is better than performance characterized by the second performance parameter of each second core, configuring the core with the target cache parameter corresponding to the largest quantity of the cached data items as the target core (¶ 67-69). This limitation is indefinite, as noted above. Examiner interprets it to mean assigning the task to a core based on both the quantity of cached data items and the performance parameter. Soundararajan discloses ranking nodes (cores) in descending order of number of blocks cached, as well as ranking based on performance. It discloses that the performance parameter may be prioritized over the cached block parameter. While it does not explicitly disclose the situation where a node both has the highest amount of cached data and has the highest available performance, since both of these parameters are used to prioritize selection of a node, it would have been obvious to one having ordinary skill in the art that in a situation where a node is both the top-ranked node according to data cached as well as the top-ranked node according to performance, it would be selected. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Chang and Soundararajan, for the reasons noted in claim 1 above.
Re claim 6, Chang and Soundararajan disclose the method of claim 1, and Chang further discloses obtaining a processing priority parameter of the first task; and in response to a determination that the processing priority of the first task satisfies a set executing condition based on the processing priority parameter, controlling the target core to process the first task with priority (¶ 30, 33, and 39). The system may set tasks to have different priorities, and may execute tasks based on those priorities (determination that the processing priority of the first task satisfies a set executing condition).

Re claim 7, Chang discloses the following:

An electronic device, comprising: a cache, including a plurality of cache units (Fig. 1, L2 caches 114_1 to 114_N). The electronic device comprises a cache distributed across a plurality of L2 caches (cache units);

one or more processors each including a plurality of cores, wherein each core corresponds to a cache unit, and cache units corresponding to all cores are different; and (Fig. 1, processor cores 117 and 118; L2 caches 114_1 to 114_N). There are one or more clusters (collectively one or more processors) each including one or more cores. The cache units correspond to all cores, and they are different. It is noted that Applicant has not explicitly disclosed what the cache unit must be different than; accordingly, Examiner interprets it to mean there are a plurality of cache units, which collectively correspond to all cores, and the cache units are different from one another;

a detection module, configure to detect a target cache region updated in the cache unit corresponding to each core of the processor, initiate an identifier for a target task updated, configure an identifier for a task belonging to the target cache region updated as the identifier for the target task, and record a cache parameter corresponding to each task in each core, wherein (¶ 26). This limitation is indefinite, as noted above. Examiner interprets it to mean configuring an identifier for a task which characterizes cached data items of a task. The task scheduler (detection module) detects information about the tasks;

the cache parameter characterizes […] cached data items of each task in the cache unit corresponding to each core; and (¶ 26). Each task belonging to a task group (identifier) contains information about specific data and/or specific memory addresses associated with that respective task. In the example given, at least one piece of said data and/or specific memory address has been accessed by another task of the same task group, and is thus cached data item[s]; accordingly, these parameters are “cache parameters” as they relate to data that is more likely to be cached, as it is being used by other tasks in the group. Furthermore, grouping tasks in this way increases a cache hit rate, which implies that data items will already be cached due to the other tasks in the group;

the one or more processors are configured to initiate a cache parameter obtaining request to the detection module based on an identifier for a first task (Fig. 1; ¶ 26). The multi-core processor system (one or more processors) initiates the process of identifying a task group for a task based on the task group it is associated with (identifier for a first task) using the statistics unit in the task scheduler (detection module);

the detection module is further configured to, in response to the cache parameter obtaining request, obtain target cache parameters corresponding to the first task in all cores based on the identifier for the first task; and (¶ 26). The task scheduler (detection module) is configured to compare thread group tasks sharing common data/memory addresses which are likely to be cached (target cache parameters) to a first task to find a group that the first task belongs to;

the one or more processors are configured to select a target core from all cores to process the first task based on the target cache parameters corresponding to all cores (¶ 26). Based on the specified data and/or specific memory addresses (target cache parameters), a target core is selected for executing the first task.

As noted above, Chang discloses grouping tasks onto cores sharing a common L2 cache such that the cache hit rate is increased, wherein in the example given, at least one piece of the specified data/memory address has been cached by another task in the group, and can thus be fetched from the cache by the first task without retrieving it from memory (¶ 26). However, while this implies that at least some of the specified data/memory addresses are cached, Chang does not explicitly disclose that the cache parameter “characterizes a quantity of cached data items of the first task”. Accordingly, Examiner has provided Soundararajan. Soundararajan discloses that the cache parameter characterizes a quantity of cached items from each task in the cache unit corresponding to each core (¶ 23 and 67-69). See claim 1 above. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Chang and Soundararajan, for the reasons noted in claim 1 above.
Re claims 8-12, Chang and Soundararajan disclose the methods of claims 2-6 above, respectively; accordingly, they also disclose electronic devices implementing those methods, as in claims 8-12, respectively (¶ 29).

Re claims 13-18, respectively, Chang and Soundararajan disclose the methods of claims 1-6 above, respectively; accordingly, they also disclose computer-readable storage media storing instructions implementing those methods, as in claims 13-18, respectively (¶ 29).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Xu et al (US 2015/0205642 A1), which uses a cache-related parameter related to cache misses to determine task migration (¶ 20).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CRAIG S GOLDSCHMIDT whose telephone number is (571)270-3489. The examiner can normally be reached M-F 10-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CRAIG S GOLDSCHMIDT/
Primary Examiner, Art Unit 2132
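To make the rejected claims easier to follow, the core-selection logic recited in claim 1 and refined in claims 4-5 (under the Examiner's interpretation above) can be sketched roughly as follows. This is an illustrative reconstruction only: the function and data names are invented, and it reflects the Office Action's reading of the claims, not the application's actual disclosure.

```python
# Illustrative sketch only: names and example data are hypothetical, and
# this follows the Examiner's interpretation of claims 1, 4, and 5, not
# the application's actual implementation.

def select_target_core(task_id, cached_items, performance=None):
    """Select the core whose cache unit holds the largest quantity of
    cached data items for the task (claims 1 and 4); under the
    Examiner's reading of claim 5, break ties using a performance
    parameter (e.g. available cycles or memory)."""
    # Target cache parameters: quantity of cached data items of the
    # first task in the cache unit corresponding to each core.
    params = cached_items[task_id]                     # {core: quantity}
    best = max(params.values())
    candidates = [c for c, q in params.items() if q == best]
    if performance is not None and len(candidates) > 1:
        return max(candidates, key=lambda c: performance[c])
    return candidates[0]

# Hypothetical example: core1 and core2 tie on cached items for the
# task, and core1 wins on the performance parameter.
cached = {"task-1": {"core0": 3, "core1": 7, "core2": 7}}
perf = {"core0": 0.4, "core1": 0.9, "core2": 0.6}
print(select_target_core("task-1", cached, perf))      # core1
```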

Prosecution Timeline

Jan 21, 2025
Application Filed
Mar 18, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596650
Preemptive Flushing of Processing-in-Memory Data Structures
2y 5m to grant • Granted Apr 07, 2026
Patent 12596481
PREFETCHING DATA USING PREDICTIVE ANALYSIS
2y 5m to grant • Granted Apr 07, 2026
Patent 12585411
Optics-Based Distributed Unified Memory System
2y 5m to grant • Granted Mar 24, 2026
Patent 12578854
COMPOSITE OPERATIONS USING MULTIPLE HIERARCHICAL DATA SPACES
2y 5m to grant • Granted Mar 17, 2026
Patent 12578883
ELASTIC EXTERNAL STORAGE FOR DISKLESS HOSTS IN A CLOUD
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview (+32.1%): 99%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 401 resolved cases by this examiner. Grant probability derived from career allow rate.
