Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,646

APPARATUS AND METHOD OF PROCESSING DATA, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA, §103
Filed: Nov 28, 2023
Examiner: KIM, SISLEY NAHYUN
Art Unit: 2196
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Kunlunxin Technology (Beijing) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (above average); 590 granted / 665 resolved; +33.7% vs TC avg
Interview Lift: +16.9% higher allow rate for resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 42 applications currently pending
Career History: 707 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 26.1% (-13.9% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 665 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3, 6, 10, 12, 15, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over CN113360266 (hereinafter CN) in view of Liu et al. (US 2020/0125508, hereinafter Liu). The English translation of CN is relied upon for claim mapping.
Regarding claim 1, CN discloses an apparatus of processing data, the apparatus comprising: a first target storage unit; and a processor (page 3: processor and memory) configured to:

determine an initial number of threads according to a data amount of target data and a capacity of the first target storage unit, in response to determining that the data amount of the target data is less than or equal to the capacity of the first target storage unit (page 4, step 310: in response to determining that the number of to-be-processed tasks in the waiting queue exceeds the first threshold, obtaining the current video memory capacity; step 320: determining a first preset number based on the current video memory capacity and the capacity occupied by the tasks to be processed; page 5, step 410: obtaining the task to be processed and the thread capacity for processing it; step 420: based on the total display memory capacity and the thread capacity, a first number is determined and a first number of threads is created. Note: the amount of task data should be less than or equal to the memory capacity to prevent memory overflow); and

determine a first number of executable tasks according to the initial number of threads, in response to determining that the initial number of threads is greater than or equal to a predetermined number of threads (pages 4-5, step 140: inputting a first preset number of tasks to be processed into threads for parallel processing; in this embodiment, after the execution main body selects the first preset number of to-be-processed tasks from the wait queue, they may be directly input into the corresponding threads; fig. 3 shows a flowchart 300 of an embodiment of selecting a first preset number of pending tasks from the waiting queue, i.e., step 130 above; step 330: after determining the first preset number according to the current video memory capacity and the capacity occupied by the to-be-processed tasks, a first preset number of tasks to be processed is selected from the waiting queue and input directly into the corresponding thread for parallel processing).

CN does not disclose wherein the target data comprises input data to be processed, weight data to be processed, and output data. Liu discloses this limitation (paragraphs [0055]-[0056], [0068]-[0076], [0081]-[0084], [0111]-[0119]: in machine-learning/neural-network processing, the target data loaded into near/on-chip memory includes input neuron data, weight data (model parameters), and output neuron data; Liu further teaches statically analyzing and splitting these data blocks and sizing memory allocation accordingly to reduce I/O and improve processor throughput).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify CN's thread/queue scheduler to apply Liu's well-known deep-learning data-footprint characterization (i.e., treating each queued task as an inference job whose data footprint comprises inputs, weights, and outputs) and to use that data-footprint information in CN's existing memory-based calculations for initial thread count and parallel dispatch.
This modification would have been a routine design choice motivated by the mutual, predictable objective of both references to maximize processor/GPU utilization and reduce I/O stalls. Liu expressly teaches that such capacity-aware data allocation improves processing efficiency (Liu, paragraph [0094]).

Regarding claim 10 (referring to claim 1), CN discloses a method of processing data, the method comprising: … (see the rejection of claim 1).

Regarding claim 20 (referring to claim 1), CN discloses a non-transitory computer-readable storage medium having computer instructions therein, the computer instructions, when executed by a computer system, configured to cause the computer system to at least: … (page 3: the memory stores instructions executable by the at least one processor).

Regarding claims 3 and 12, CN does not disclose further comprising a second target storage unit, wherein a capacity of the second target storage unit is greater than the capacity of the first target storage unit. Liu discloses this limitation (paragraph [0052]: the storage capacity of the first memory 200 is smaller than the storage capacity of the second memory 300). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify CN's thread/queue scheduler to include a second target storage unit of larger capacity as taught by Liu. Liu's explicit teaching that a first (near) memory has smaller capacity than a second (larger) memory (paragraph [0052]), together with its guidance to allocate input/weight/output data across the two memory levels, would have been applied to CN's scheduler to provide the recited two-tier memory arrangement.
The combination is motivated by the shared, predictable objective in both references of maximizing GPU/processor utilization and minimizing I/O stalls: applying Liu's memory hierarchy and data-footprint analysis to CN's memory-driven thread and dispatch logic would enable the scheduler to better size thread pools and parallel dispatch while avoiding memory overcommitment (Liu, paragraph [0094], discussing improved processing efficiency). Thus, the addition of a second target storage unit having capacity greater than the first is an obvious design choice in view of Liu.

Regarding claims 6 and 15, CN does not disclose wherein the processor is further configured to determine a third number of executable tasks according to an amount of resources required by the processor to process the target data, in response to determining that the data amount of the target data is greater than the capacity of the first target storage unit. Liu discloses this limitation (paragraphs [0090]-[0094], [0111]-[0121]: when the data volume exceeds the first memory's capacity, split the inputs/operations, determine how many blocks can fit, and process accordingly). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify CN's thread/queue scheduler to incorporate Liu's capacity-aware data-footprint teachings (i.e., to treat each queued task as an inference job whose memory footprint comprises input data, weight data, and output data) and to use that footprint information to compute a third number of executable tasks when the target data exceeds first-memory capacity.
This modification is a routine design choice motivated by the common, predictable objective of both references to maximize processor/GPU utilization and minimize I/O stalls. Liu expressly teaches that splitting and sizing work according to memory capacity improves processing efficiency (Liu, paragraph [0094]).

Regarding claim 19, CN discloses an electronic device comprising the apparatus according to claim 1 (see the rejection of claim 1).

Allowable Subject Matter

Claims 2, 4, 5, 7-9, 11, 13, 14, and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM, whose telephone number is (571) 270-7832. The examiner can normally be reached M-F, 11:30 AM - 7:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Y. Blair, can be reached at (571) 270-1014. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SISLEY N KIM/
Primary Examiner, Art Unit 2196
3/8/2026
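To make the mapped claim limitations easier to follow, the scheduling logic recited in claims 1 and 6/15 can be sketched in Python. This is a hypothetical illustration of what the claims recite, not the applicant's or CN's actual implementation; all names, and the capacity-divided-by-footprint arithmetic, are assumptions for the sketch.

```python
from dataclasses import dataclass


@dataclass
class TargetData:
    """Per Liu's mapping, a task's footprint comprises inputs, weights, and outputs."""
    input_bytes: int
    weight_bytes: int
    output_bytes: int

    @property
    def amount(self) -> int:
        return self.input_bytes + self.weight_bytes + self.output_bytes


def plan_dispatch(data: TargetData, first_storage_capacity: int,
                  per_thread_bytes: int, predetermined_threads: int,
                  available_resources: int = 0, resources_per_task: int = 1) -> int:
    """Return how many tasks to dispatch for parallel processing."""
    if data.amount <= first_storage_capacity:
        # Claim 1: the initial number of threads is determined from the data
        # amount and the capacity of the first target storage unit (cf. CN
        # steps 410/420: capacity divided by a per-thread footprint).
        initial_threads = first_storage_capacity // per_thread_bytes
        if initial_threads >= predetermined_threads:
            # The first number of executable tasks follows the thread count.
            return initial_threads
        raise NotImplementedError("branch not recited in the rejected claims")
    # Claims 6/15: the data amount exceeds the first storage unit's capacity,
    # so a third number of executable tasks is sized by the processor
    # resources each task requires.
    return available_resources // resources_per_task
```

For example, a 200-byte task against a 400-byte storage unit at 100 bytes per thread yields four threads and four executable tasks; a 600-byte task instead takes the resource-based branch of claims 6/15.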

Prosecution Timeline

Nov 28, 2023
Application Filed
Mar 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602254
JOB NEGOTIATION FOR WORKFLOW AUTOMATION TASKS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602260
COMPUTER-BASED PROVISIONING OF CLOUD RESOURCES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591474
BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585507
LOAD TESTING AND PERFORMANCE BENCHMARKING FOR LARGE LANGUAGE MODELS USING A CLOUD COMPUTING PLATFORM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12578994
SYSTEMS AND METHODS FOR TRANSITIONING COMPUTING DEVICES BETWEEN OPERATING STATES
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+16.9%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 665 resolved cases by this examiner. Grant probability derived from career allow rate.
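The derivation the note describes is simple arithmetic; the headline figure can be reproduced from the raw career counts reported above (590 granted of 665 resolved):

```python
# Reproduce the headline grant probability from the examiner's career counts
# shown in Examiner Intelligence: 590 granted out of 665 resolved applications.
granted, resolved = 590, 665
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints 88.7%, displayed rounded as 89%
```

The interview-adjusted 99% figure comes from the subset of resolved cases that had an examiner interview and cannot be reproduced from the counts shown here.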
