Prosecution Insights
Last updated: April 18, 2026
Application No. 18/184,659

ELECTRONIC APPARATUS AND IMAGE PROCESSING METHOD OF ELECTRONIC APPARATUS

Non-Final OA — §103

Filed: Mar 16, 2023
Examiner: MENBERU, BENIYAM
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% — above average (519 granted / 707 resolved; +11.4% vs TC avg)
Interview Lift: +13.2% (moderate lift for resolved cases with interview)
Avg Prosecution: 2y 6m typical timeline; 33 currently pending
Total Applications: 740 across all art units (career history)
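As a sanity check, the headline figures are reproducible from the raw counts shown above. The sketch below assumes the interview lift is simply added in percentage points to the career allow rate; the displayed numbers are consistent with that, but it is an assumption about the dashboard's arithmetic, not its documented method.

```python
# Back-of-envelope reproduction of the dashboard's headline figures.
# Counts (519 granted of 707 resolved) come from the page; the
# additive-lift model is an assumption, not the vendor's stated method.

granted, resolved = 519, 707

allow_rate = granted / resolved * 100          # career allow rate, in %
interview_lift = 13.2                          # percentage points, from the page

with_interview = allow_rate + interview_lift   # assumes lift is additive

print(f"Career allow rate: {allow_rate:.1f}%")      # ~73.4%, displayed as 73%
print(f"With interview:    {with_interview:.1f}%")  # ~86.6%, displayed as 87%
```

Note that 519/707 is 73.4%, so adding 13.2 points gives 86.6%, which rounds to the displayed 87% even though the rounded figures (73 + 13.2) would not.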

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 707 resolved cases
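Assuming each "vs TC avg" figure is a plain difference in percentage points, the implied Tech Center average can be backed out, and it comes out the same for all four statutes. A quick consistency check (the dictionary layout is illustrative only):

```python
# Consistency check: examiner_rate - delta should recover the same
# Tech Center average for every statute. Values are from the table above.
stats = {
    "101": (10.1, -29.9),
    "103": (62.2, +22.2),
    "102": (10.7, -29.3),
    "112": (12.2, -27.8),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same ~40.0% TC average
```

The uniform ~40% result suggests the deltas were all computed against a single pooled Tech Center baseline rather than per-statute averages, though that is an inference from the numbers, not something the page states.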

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed February 4, 2026 have been fully considered but they are not persuasive.

In the Remarks dated February 4, 2026, page 6, last two paragraphs, and top two paragraphs of page 7, Applicant stated that in Wang the SOC includes the ISP and AP chip while another chip includes the NPU and built-in memory 230, and that therefore the ISP, NPU, and memory 230 are not integrated onto the same chip. However, the Examiner disagrees: as shown in Fig. 27 of Wang, the multimedia processing chip 200 integrates on the same chip the ISP 210, NPU 220, and built-in memory 230. The SOC described in paragraph 65 of Wang is the AP chip 400, which, as shown in Fig. 27, has its own ISP 420 and AP 410. The ISP 420 in SOC chip 400 is therefore not the ISP of multimedia chip 200; the multimedia chip integrates its own ISP 210, memory 230, and NPU 220 (paragraphs 229-240).

Further, on page 7, 3rd paragraph, Applicant stated that the intermediate data in Wang is stored in the built-in cache of the NPU and not in a separate built-in on-chip memory as claimed. However, Wang at paragraphs 168-169 states that the optimizing module 214 of the ISP 210 can first process data and send the processed data to memory 230. This data is transferred from memory 230 to the NPU, which processes it; the NPU's output is stored in memory 230 and then transferred back to the optimizing module 214 of the ISP for "second optimization" in block 2046. Thus the data processed and output by the NPU prior to "second optimization" is intermediate data that is stored in memory 230 prior to transfer to the ISP; it is intermediate data because there is subsequent processing of this data.
Thus the memory 230 can be used both for transfer of image block data between the ISP and NPU (paragraph 105) and for storing intermediate data of the NPU, as stated above.

On page 7, last paragraph, through the top of page 8, Applicant stated that Wang only processes the partial image in the cache of the NPU and does not transfer the partial image with the memory 230. However, the claim 1 limitation states that the on-chip memory is "configured to transfer an image block in the first image signal between the ISP and AI processor". The image block (partial image) is related to the first image signal, which is output from the ISP and then processed by the AI processor. Wang transfers a partial image of 1/n frame of the first image from memory 230 to the NPU (paragraphs 105, 247: the image processed by the ISP via optimizing module 214 is stored to memory 230 as the first image; paragraph 105: the image transferred from memory 230 to the NPU is in image data blocks of the first image that are part of a frame (1/n frame)). The caching of the image in the NPU relates to the partial image processed after it is transferred to the NPU.

On page 8, Applicant stated that the Larson reference fails to teach integrating on the SOC the ISP, AI processor, and on-chip memory, and transferring image blocks therebetween. However, this argument is moot because, as stated above, Wang teaches the limitations of claim 1 argued in the Remarks on pages 6-8. Larson was relied on to teach storing of image tile blocks to the on-chip memory; as stated above, Wang teaches integrating on the multimedia chip 200 (SOC) the ISP, NPU (AI processor), and built-in memory 230 (on-chip memory).

Further, on the bottom of page 8 through the top of page 9, Applicant stated that Larson lacks motivation to combine with Wang. However, Larson teaches writing compressed data from an ISP to the on-chip memory in image tile form that can be scaled to different block sizes without overlapping of tiles, which simplifies firmware (paragraph 25).
Claim Objections

Claim 15 is objected to because of the following informalities: in claim 15, "An non-transitory computer-readable" should be amended to "A non-transitory computer-readable". Appropriate correction is required.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 12, 2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 8-10, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson.
NOTE: under the AIA, the foreign priority date of Wang '519 is used as the effectively filed date.

Regarding claim 1, Wang discloses a system on chip (SOC) integrated into a same chip (paragraph 101; multimedia processing chip (system on chip on same chip 200)), comprising:

an image signal processor (ISP), configured to receive image data from an image sensor, and perform third image signal processing on the image data to obtain a first image signal (paragraphs 105, 118, 124, 247; ISP receives image from camera; optimizing module 214 of the ISP processes the image via BPC (third image signal processing) to generate the first image signal);

an artificial intelligence (AI) processor, configured to perform first image signal processing on the first image signal to obtain a second image signal (paragraphs 118-119; ISP includes optimizing module 214 that performs image processing such as BPC on a dynamic image to generate the optimized first image; paragraphs 73, 98, 247; NPU (AI processor) processes this optimized image to generate processed image data (second image)), wherein the AI processor includes a dedicated neural processor (paragraph 104; NPU includes dedicated neural processor); and

on-chip memory, coupled to the AI processor and the ISP and configured to transfer an image block in the first image signal between the ISP and the AI processor (paragraphs 105, 247; memory 230 is on-chip memory for chip 200 and is coupled to NPU 220 and ISP 210 in Fig. 6; the image processed by the ISP's optimizing module 214 is stored to memory 230, which is then sent to the NPU (AI); paragraph 105; the NPU reads part of the image frame (partial) in terms of data blocks from memory), wherein the image block is a partial image signal in a frame of the first image signal (paragraph 105; the image transferred from memory 230 to the NPU is in image data blocks that are part of a frame (1/n frame));

wherein the on-chip memory is further configured to store intermediate data generated in a running process of the AI processor (paragraphs 168-169; the optimizing module 214 of the ISP 210 can first process data and send the processed data to memory 230 as "first optimizing" data; this data is transferred from memory 230 to the NPU, which processes it; the data processed by the NPU (running process) is stored back in memory 230 and then transferred back to the optimizing module 214 of the ISP for "second optimization" in block 2046; the data processed and output by the NPU and stored in memory 230 prior to "second optimization" is intermediate data since the processing is not complete and further processing (second optimization) is performed by the optimizing module 214).

However, Wang does not disclose wherein the ISP stores the first image signal to the on-chip memory in a form of the image block. Larson discloses wherein the ISP stores the first image signal to the on-chip memory in a form of the image block (paragraphs 26, 48-49; compressor 15 (ISP) stores compressed image data into on-chip memory 60 in the form of image tiles (image blocks)). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Larson to provide image block storage in memory.
The motivation to combine the references is to provide compressed storage of image data in tile blocks in on-chip memory that can be scaled to various block sizes without overlapping tiles, which removes complexity associated with firmware use (paragraph 25).

Regarding claim 6, Wang discloses the system on chip according to claim 1, wherein the third image signal processing comprises at least one of the following processing processes: noise cancellation, black level calibration, shadow calibration, white balance calibration, or demosaicing (paragraph 263; chip 200 performs noise reduction).

Regarding claim 8, Wang discloses the system on chip according to claim 1, further comprising: a central processing unit (CPU), configured to control running of the ISP and the AI processor (paragraphs 133, 172; CPU 260 controls the NPU (AI) and the ISP, including optimizing module 214).

Regarding claim 9, Wang discloses the system on chip according to claim 1, wherein the on-chip memory comprises an on-chip random access memory (RAM) (paragraph 127; memory 230 can be SDRAM).

Regarding claim 10, Wang discloses the system on chip according to claim 1, wherein the dedicated neural processor comprises a neural processing engine (paragraph 104; NPU (neural processor) includes a dedicated processor).
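The claim-1 data flow the Examiner maps onto Wang — the ISP writes 1/n-frame image blocks to on-chip memory, the NPU reads and processes them, and the NPU's intermediate output sits in the same memory before the ISP's "second optimization" — can be sketched as below. This is a hypothetical illustration of the claimed flow only; the function names, block count, and arithmetic are invented for the sketch and are not drawn from either reference.

```python
# Hypothetical sketch of the claimed SOC data flow (ISP -> on-chip
# memory -> NPU -> on-chip memory -> ISP). All names are illustrative.

N_BLOCKS = 4  # the frame is moved as 1/n-frame image blocks

def isp_first_optimize(raw_block):
    # stand-in for e.g. bad-pixel correction producing the "first image signal"
    return [px + 1 for px in raw_block]

def npu_process(block):
    # stand-in for AI processing; its output is the "intermediate data"
    return [px * 2 for px in block]

def isp_second_optimize(block):
    # stand-in for the ISP's "second optimization" pass
    return [px - 1 for px in block]

def process_frame(raw_frame):
    on_chip_memory = {}  # shared by ISP and NPU on the same chip
    size = len(raw_frame) // N_BLOCKS
    out = []
    for i in range(N_BLOCKS):
        raw_block = raw_frame[i * size:(i + 1) * size]
        on_chip_memory[i] = isp_first_optimize(raw_block)   # ISP writes the block
        on_chip_memory[i] = npu_process(on_chip_memory[i])  # NPU parks intermediate data
        out.extend(isp_second_optimize(on_chip_memory[i]))  # ISP reads it back
    return out

print(process_frame(list(range(8))))  # [1, 3, 5, 7, 9, 11, 13, 15]
```

The point of contention in the Remarks maps onto one line: whether the intermediate result lives in `on_chip_memory` (as the Examiner reads Wang) or only in an NPU-internal cache (as Applicant argues).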
Regarding claim 14, Wang discloses an image processing method of a system on chip (SOC) integrated into a same chip (paragraph 101; multimedia processing chip (system on chip on same chip 200)), wherein the method comprises:

receiving image data from an image sensor and performing third image signal processing on the image data to obtain a first image signal, by an image signal processor (ISP) in the system on chip (paragraphs 105, 118, 124, 247; ISP receives image from camera; optimizing module 214 of the ISP processes the image via BPC (third image signal processing) to generate the first image signal; paragraph 114; ISP 210 in system on chip 200);

transferring, by an on-chip memory in the system on chip, an image block in the first image signal between the ISP and an artificial intelligence (AI) processor in the system on chip, wherein the image block is a partial image signal in a frame of the first image signal (paragraphs 105, 247; memory 230 is on-chip memory for chip 200 and is coupled to NPU 220 and ISP 210 in Fig. 6; the image processed by the ISP's optimizing module 214 is stored to memory 230, which is then sent to the NPU (AI); paragraph 105; the NPU reads part of the image frame (partial) in terms of data blocks from memory); and

performing, by the AI processor, first image signal processing on the first image signal to obtain a second image signal (paragraphs 118-119; ISP includes optimizing module 214 that performs image processing such as BPC on a dynamic image to generate the optimized first image; paragraphs 73, 98, 247; NPU (AI processor) processes this optimized image to generate processed image data (second image));

wherein the on-chip memory is further configured to store intermediate data generated in a running process of the AI processor (paragraphs 168-169; the optimizing module 214 of the ISP 210 can first process data and send the processed data to memory 230 as "first optimizing" data; this data is transferred from memory 230 to the NPU, which processes it; the data processed by the NPU (running process) is stored back in memory 230 and then transferred back to the optimizing module 214 of the ISP for "second optimization" in block 2046; the data processed and output by the NPU and stored in memory 230 prior to "second optimization" is intermediate data since the processing is not complete and further processing (second optimization) is performed by the optimizing module 214).

However, Wang does not disclose that the ISP stores the first image signal to the on-chip memory in a form of the image block. Larson discloses that the ISP stores the first image signal to the on-chip memory in a form of the image block (paragraphs 26, 48-49; compressor 15 (ISP) stores compressed image data into on-chip memory 60 in the form of image tiles (image blocks)). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Larson to provide image block storage in memory.
The motivation to combine the references is to provide compressed storage of image data in tile blocks in on-chip memory that can be scaled to various block sizes without overlapping tiles, which removes complexity associated with firmware use (paragraph 25).

Regarding claim 15, see the rejection of claim 14. Further, Wang discloses a non-transitory computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when being executed by at least one processor in a system on chip, the computer program is used to implement a method (paragraphs 104, 127, 133; CPU executing program instructions together with memory 230 storing program instructions to implement the method).

Claims 2-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson, further in view of US 20180137603 to Hsiao.

Regarding claim 2, Wang does not disclose the system on chip according to claim 1, wherein the ISP is further configured to perform second image signal processing on the second image signal to obtain an image processing result. Hsiao discloses wherein the ISP is further configured to perform second image signal processing on the second image signal to obtain an image processing result (paragraphs 33-34; image processor 36 receives post-processing data from CNN 35 as the second image signal and processes (second image signal processing) this data to generate a forward-prediction high-resolution frame (image processing result)). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Hsiao to provide an ISP that performs additional image processing on AI-processed data.
The motivation to combine the references is to provide cooperative image processing of image data such that encoding/decoding ISP processing can be performed separately from the processing of the AI processor that handles the resolution of the image, so that a high-quality image of high resolution can be output (paragraphs 7, 26-28).

Regarding claim 3, Hsiao discloses the system on chip according to claim 1, wherein the third image signal processing comprises a plurality of processing processes, and in two adjacent processing processes in the plurality of processing processes (paragraph 20; third signal processing includes decoder 32 and encoder 38 as adjacent processes), a previous processing process is used to generate a third image signal, and a next processing process is used to process a fourth image signal (paragraphs 25-26, 28; decoder 32 (previous processing) generates decoded data (third image) and encoder 38 (next processing) encodes the received super-resolution frame (fourth image)); and the AI processor is further configured to perform fourth image signal processing on the third image signal to obtain the fourth image signal (paragraphs 26-27; convolutional neural network module 35 (AI processor) performs "super-resolution reconstruction image processing" (fourth image signal processing) on the decoded data (third image) to obtain the super-resolution image (fourth image)).

Regarding claim 5, Wang discloses the system on chip according to claim 2, wherein the second image signal processing comprises at least one of the following processing processes: noise cancellation, black level calibration, shadow calibration, white balance calibration, demosaicing, color difference calibration, gamma calibration, or RGB-to-YUV domain conversion (paragraph 263; chip 200 performs noise reduction).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson, further in view of US 20190320127 to Buckler.
Regarding claim 4, Wang does not disclose the system on chip according to claim 1, wherein the first image signal processing comprises at least one of the following processing processes: noise cancellation, black level calibration, shadow calibration, white balance calibration, demosaicing, color difference calibration, or gamma calibration. Buckler discloses wherein the first image signal processing comprises at least one of the following processing processes: noise cancellation, black level calibration, shadow calibration, white balance calibration, demosaicing, color difference calibration, or gamma calibration (paragraph 34; AI processor performs first image signal processing such as demosaicing). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Buckler to provide first image processing including demosaicing. The motivation to combine the references is to provide an efficient and power-saving system for image processing for a computer vision mode executed by AI processors, by generating image data of lower quality that is sufficient for machine vision without generating a high-quality image that requires substantial processing (paragraph 21).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson, further in view of US 20180137603 to Hsiao, further in view of US 20190320127 to Buckler.

Regarding claim 7, Wang does not disclose the electronic apparatus according to claim 3, wherein the fourth image signal processing comprises at least one of the following processing processes: black level calibration, shadow calibration, white balance calibration, demosaicing, or color difference calibration.
Buckler discloses wherein the fourth image signal processing comprises at least one of the following processing processes: black level calibration, shadow calibration, white balance calibration, demosaicing, or color difference calibration (paragraphs 37, 44; AI processor executes fourth image processing in step 218, including demosaicing). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Buckler to provide fourth image signal processing including demosaicing. The motivation to combine the references is to provide an efficient and power-saving system for image processing for a computer vision mode executed by AI processors, by generating image data of lower quality that is sufficient for machine vision without generating a high-quality image that requires substantial processing (paragraph 21).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson, further in view of US 20210295145 to Bayat.

Regarding claim 12, Wang does not disclose the system on chip according to claim 1, wherein the on-chip memory is further configured to store weight data of each network node in a neural network run by the AI processor. Bayat discloses wherein the on-chip memory is further configured to store weight data of each network node in a neural network run by the AI processor (paragraphs 19, 53, 61; on-chip memory stores the weights of the layers (nodes) of the neural network (accelerator processor)). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Bayat to provide on-chip storage of node weights. The motivation to combine the references is the reduction in cost associated with keeping node weights in an external memory, achieved by placing the node weights in on-chip memory (paragraph 61).

Claims 13 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230086519 to Wang in view of US 20200402263 to Larson, further in view of US 20200160494 to Hwang.

Regarding claim 13, Wang does not disclose the system on chip according to claim 1, wherein an interrupt signal is transmitted between the AI processor and the ISP through an electronic line connection. Hwang discloses wherein an interrupt signal is transmitted between the AI processor and the ISP (paragraphs 113, 157; interrupt request between the image processor (processor 120) and the neural network). It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the system of Wang as taught by Hwang to provide interrupt signals between the AI processor and the ISP device. The motivation to combine the references is to provide manual interruption of image processing via a user request to stop processing in the neural network (paragraph 113). Although Hwang does not disclose "through an electronic line connection", the Examiner takes official notice that electronic line connections are inherent to all electronic devices.

Regarding claim 16, Hwang discloses the system on chip according to claim 1, wherein the AI processor is further configured to run one or more image processing models to perform the first image signal processing (paragraphs 71, 151; processor 120 (AI) performs image processing using a neural network that is a trained model).

Regarding claim 17, Hwang discloses the system on chip according to claim 16, wherein the AI processor is further configured to load an executable program in an off-chip memory to run the one or more image processing models (paragraphs 148-151, 159, 168; processor 120, including 1410/1420, is on a chip and loads a program from memory 130, which is off-chip as shown in Fig. 14; the program is used to execute the models for image processing).
Regarding claim 18, Hwang discloses the system on chip according to claim 1, further comprising a communication unit (paragraph 182; communication interface 1750).

Regarding claim 19, Hwang discloses the system on chip according to claim 18, wherein the communication unit comprises at least one of: a short-distance communication unit or a cellular communication unit (paragraph 182; Bluetooth network 1752 (short distance)).

Regarding claim 20, Hwang discloses the system on chip according to claim 1, further comprising a graphics processing unit (GPU) (paragraph 168; GPU).

Other Prior Art Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20210004941 to Hanwell.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENIYAM MENBERU, whose telephone number is (571) 272-7465. The examiner can normally be reached Monday-Friday, 10:00am-6:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the customer service office, whose telephone number is (571) 272-2600. The group receptionist number for TC 2600 is (571) 272-2600.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. For more information about the PAIR system, see <http://pair-direct.uspto.gov/>. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Patent Examiner Beniyam Menberu
/BENIYAM MENBERU/
Primary Examiner, Art Unit 2681
04/03/2026

Prosecution Timeline

Mar 16, 2023: Application Filed
May 01, 2025: Non-Final Rejection — §103
Aug 05, 2025: Response Filed
Nov 07, 2025: Final Rejection — §103
Feb 04, 2026: Response after Non-Final Action
Mar 12, 2026: Request for Continued Examination
Mar 16, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593978: SYSTEM AND METHOD FOR IDENTIFYING A DISEASE AFFECTED AREA (2y 5m to grant; granted Apr 07, 2026)
Patent 12594480: EXERCISE SUPPORT DEVICE OPERATING WITH WEIGHT TRAINING EQUIPMENT (2y 5m to grant; granted Apr 07, 2026)
Patent 12594048: NOISE ANALYSIS SYSTEMS AND METHODS (2y 5m to grant; granted Apr 07, 2026)
Patent 12585170: METHOD AND APPARATUS FOR DISPLAYING CULTURED CELLS (2y 5m to grant; granted Mar 24, 2026)
Patent 12587604: IMAGE READING APPARATUS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 87% (+13.2%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 707 resolved cases by this examiner. Grant probability derived from career allow rate.
