Prosecution Insights
Last updated: April 19, 2026
Application No. 18/939,051

Computer Vision Processing Circuitry

Non-Final OA (§102, §103)
Filed
Nov 06, 2024
Examiner
WANG, XI
Art Unit
2637
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 84%, above average (440 granted of 523 resolved; +22.1% vs TC avg)
Interview Lift: +13.9% among resolved cases with an interview (moderate)
Typical Timeline: 2y 5m average prosecution; 15 applications currently pending
Career History: 538 total applications across all art units

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 9.9% (-30.1% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 523 resolved cases.

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on November 6, 2024 and June 5, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 5-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hirai (US Pub. No. US 2021/0368062 A1).

Regarding claim 1, Hirai discloses an electronic device (Para 48; digital camera 100) comprising: one or more sensors (Para 11; Para 66; Fig. 3; image sensor of a sensor unit; sensor unit 106) configured to capture an image; computer vision processing circuitry (Para 70; a front engine 130 includes an image processing circuit 131 that processes image data obtained from the sensor unit 106, and the system control unit 132 that controls the operations of the digital camera 100 and the lens unit 150) configured to receive the captured image and having a plurality of subsystems configured to operate in a first power domain (Para 83, 109; first power supply domain 220; the power supply control unit 107 has individual power supply lines P220 to P222 to perform independent power supply control for the first to third power supply domains 220 to 222; each of the power supply lines P220 to P222 is illustrated as a single wire for the sake of convenience); and a back-end image signal processing pipeline coupled to the computer vision processing circuitry (Fig. 2; Para 74, 77; the main engine 140; the front engine 130 executes startup control of the main engine 140 in accordance with the operation mode of the digital camera 100; the main engine 140 includes an image processing circuit 141, a recording/playback unit 143, and a control unit 142 that controls the operations of the main engine 140; the image processing circuit 141 can apply image processing to unreduced image data obtained from the front engine 130, image data recorded in a storage device 160, and the like) and configured to operate in a second power domain different than the first power domain (Para 87; Fig. 2; the figure shows that the main engine 140 is supported by second power supply domain 221, which is different than power supply domain 220).

Regarding claim 2, Hirai discloses the electronic device of claim 1, further comprising: one or more displays configured to receive content for display from the back-end image signal processing pipeline (Para 156; Para 352; the main engine 140 can execute output control processing of outputting the display image data input from the front engine 130 to an external device via the communication unit 109; image data is displayed in the external device such as an external monitor or the like).
Regarding claim 3, Hirai discloses the electronic device of claim 1, wherein the one or more sensors comprise: one or more outward-facing cameras configured to capture an image of an environment (Figs. 1A, 1B; Para 48-50; the camera lens faces outward to capture images of objects).

Regarding claim 5, Hirai discloses the electronic device of claim 1, wherein: the computer vision processing circuitry is further configured to output a processed image in accordance with first image processing requirements (Para 70-72; a front engine 130 includes an image processing circuit 131 that processes image data obtained from the sensor unit 106; the front engine 130 mainly handles image data reduction processing (resolution reduction) and image processing performed on reduced image data; the display image data generated by the image processing circuit 131 is image data for a live view display carried out in at least one of the display unit 101 and the EVF 108 (live view image data)); and the back-end image signal processing pipeline is further configured to output a processed image in accordance with second image processing requirements different than the first image processing requirements (Para 143; the image processing circuit 141 of the main engine 140 generates the recording image data by applying, to the RAW data obtained from the front engine 130, development processing that has a higher level of quality than the front engine 130; the image processing circuit 141 can also apply development processing to the RAW data recorded in the storage device 160; additionally, the recording/playback unit 143 records the recording image data generated by the image processing circuit 141 into the storage device 160, the recording medium 200, or the like).
Regarding claim 6, Hirai discloses the computer vision processing circuitry is configured to output a processed image having a first quality or using a first amount of power (Para 70-72; a front engine 130 includes an image processing circuit 131 that processes image data obtained from the sensor unit 106; the front engine 130 mainly handles image data reduction processing (resolution reduction) and image processing performed on reduced image data; the display image data generated by the image processing circuit 131 is image data for a live view display carried out in at least one of the display unit 101 and the EVF 108 (live view image data)); and the back-end image signal processing pipeline is configured to output a processed image having a second quality greater than the first quality or using a second amount of power greater than the first amount of power (Para 143; the image processing circuit 141 of the main engine 140 generates the recording image data by applying, to the RAW data obtained from the front engine 130, development processing that has a higher level of quality than the front engine 130; the image processing circuit 141 can also apply development processing to the RAW data recorded in the storage device 160; additionally, the recording/playback unit 143 records the recording image data generated by the image processing circuit 141 into the storage device 160, the recording medium 200, or the like).

Regarding claim 7, Hirai discloses the electronic device of claim 5, wherein: the computer vision processing circuitry is configured to output a processed image by performing a first set of image processing operations (Para 70-72; a front engine 130 includes an image processing circuit 131 that processes image data obtained from the sensor unit 106; the front engine 130 mainly handles image data reduction processing (resolution reduction) and image processing performed on reduced image data; the display image data generated by the image processing circuit 131 is image data for a live view display carried out in at least one of the display unit 101 and the EVF 108 (live view image data); wherein resolution reduction can be considered image processing); and the back-end image signal processing pipeline is configured to output a processed image by performing additional image processing operations different than the first set of image processing operations (Para 143; the image processing circuit 141 of the main engine 140 generates the recording image data by applying, to the RAW data obtained from the front engine 130, development processing that has a higher level of quality than the front engine 130; the image processing circuit 141 can also apply development processing to the RAW data recorded in the storage device 160; additionally, the recording/playback unit 143 records the recording image data generated by the image processing circuit 141 into the storage device 160, the recording medium 200, or the like; development processing can be considered as additional image processing).

Regarding claim 8, Hirai discloses the back-end image signal processing pipeline is selectively deactivated (Para 150-151; a recording unit 326 of the recording/playback unit 143 records a data file containing the coded image data generated by the compression unit 325 into the recording medium 200 using, for example, a method compliant with DCF (Design rule for Camera File system); in this manner, with the digital camera 100 according to the present embodiment, image processing pertaining to the live view display in the display unit 101 and the EVF 108 can be performed using the front engine 130, and it is not necessary to use the main engine 140. Para 156; on the other hand, in the restricted state, the main engine 140 cannot execute one or more of the above-described recording control processing, playback display control processing, and output control processing).
Regarding claim 9, Hirai discloses the electronic device of claim 1, wherein the computer vision processing circuitry comprises: a sensor interface coupled to the one or more sensors (Para 94; GUI screens); a front-end processing subsystem configured to receive images from the sensor interface (Para 94; the system control unit 132 reads out GUI data for the mode selection screen 700 from the ROM of the system memory 133; then, the system control unit 132 supplies the GUI data to the image processing circuit 131 and causes the mode selection screen 700 to be displayed in the display unit 101); a statistics pipeline configured to receive images from the front-end processing subsystem (Para 75; the system memory 133 includes non-volatile memory (ROM) and volatile memory (RAM); programs executed by the system control unit 132, setting values of the digital camera 100, GUI image data such as icons displayed along with menu screens and live view images, and the like are stored in the ROM); and a processing unit configured to coordinate operations of the sensor interface, the front-end processing subsystem, and the statistics pipeline (Para 77; the image processing circuit 141 can apply image processing to unreduced image data obtained from the front engine 130, image data recorded in a storage device 160, and the like).

Regarding claim 10, Hirai discloses the electronic device of claim 1, further comprising: a first client processor coupled to the computer vision processing circuitry and configured to execute a first set of algorithms (Para 70-72; a front engine 130 includes an image processing circuit 131 that processes image data obtained from the sensor unit 106; the front engine 130 mainly handles image data reduction processing (resolution reduction) and image processing performed on reduced image data; the display image data generated by the image processing circuit 131 is image data for a live view display carried out in at least one of the display unit 101 and the EVF 108 (live view image data); wherein resolution reduction can be considered image processing); and a second client processor coupled to the computer vision processing circuitry and configured to execute a second set of algorithms different than the first set of algorithms (Para 143; the image processing circuit 141 of the main engine 140 generates the recording image data by applying, to the RAW data obtained from the front engine 130, development processing that has a higher level of quality than the front engine 130; the image processing circuit 141 can also apply development processing to the RAW data recorded in the storage device 160; additionally, the recording/playback unit 143 records the recording image data generated by the image processing circuit 141 into the storage device 160, the recording medium 200, or the like; development processing can be considered as additional image processing).

Claims 11, 14, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gai et al. (US Pub. No. US 2010/0194920 A1).

Regarding claim 11, Gai et al. discloses a method of operating an electronic device (Para 39; operations flow on a digital camera), comprising: with at least one client (Para 78; user issues operation request) running on the electronic device, outputting image requests; with a multiclient scheduler, receiving the image requests from the at least one client and feeding the image requests into a queue (Figs. 6A, 6B; Para 67; the application ranks each task's importance 503 as well as its instantaneous priority 504; using this information, the system predicts the user's next possible moves and dynamically prioritizes tasks in the task queue to further accelerate the user feedback on the camera operation request); with the multiclient scheduler, reordering at least some of the image requests in the queue (Para 78; when the user issues the operation request 204, the application dynamically schedules the necessary tasks 213 according to its current priority table 600; the resulting dynamically-prioritized task queue 612 is optimized to the user's behavior; upon the image capture request, the application follows this queue to first process C_1 to C_3 604 of the Initial Capture steps 603, then steps T_1 to T_4 of Thumbnail Preview steps 605; a User Feedback 611 is generated at this point to preview the captured image); and fulfilling at least some of the image requests in the queue by directing one or more image sensors in the electronic device to capture an image (Para 77, 78; each main task 601 is made up of smaller sub-tasks 602; as an example, an Initial Capture 603 task is broken down into 3 smaller sub-tasks, which are "Resets Camera Image Sensor", "Wait T Milliseconds" and "Read Camera Image Sensor Value Into Memory"; dynamically-prioritized queuing can be done on any level; upon the image capture request, the application follows this queue to first process C_1 to C_3 604 of the Initial Capture steps 603, then steps T_1 to T_4 of Thumbnail Preview steps 605; a User Feedback 611 is generated at this point to preview the captured image; prior to completing the remaining post-processing steps, the application first pre-computes grayscale previews G_1 and G_2 607 of the captured image in anticipation of a likely grayscale conversion request).

Regarding claim 14, Gai et al. discloses the method of claim 11, wherein at least some of the image requests in the queue are reordered based on deadlines or timing requirements specified in the image requests (Para 54; the proposed method processes only the mandatory time-critical tasks and postpones the time-uncritical tasks for later; depending on the user's requests, the proposed method intelligently finds the best time to complete the remaining tasks; an example of time-critical tasks is reading of the image sensor value, which must be done as soon as possible; an example of time-uncritical tasks is compressing the raw image values into a smaller file, which can be done at any time as long as the raw image values are available).

Regarding claim 15, Gai et al. discloses wherein reordering at least some of the image requests in the queue comprises reordering at least some of the image requests in the queue based on priority levels associated with the image requests (Para 67; the application ranks each task's importance 503 as well as its instantaneous priority 504; using this information, the system predicts the user's next possible moves and dynamically prioritizes tasks in the task queue to further accelerate the user feedback on the camera operation request).

Claims 19, 22, and 23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Beatch (US Patent No. US 10,839,856 B2).

Regarding claim 19, Beatch discloses a method of operating an electronic device (Col 4, lines 4-30; Col 6, lines 7-25; system 100 includes one or more cameras 104 in communication with a computing system 102 via a source network 106 and one or more destination devices 118 in communication with the computing system 102 via a distribution network 116; wherein the cameras can be configured) having one or more sensors, the method comprising: executing one or more clients on the electronic device (Col 7, lines 10-20; user input received via controller 114); with a multiclient scheduler coupled to the one or more clients, receiving an image request from a client in the one or more clients (Col 4, lines 4-40; Col 7, lines 22-35 and 40-50; in response to receiving user input, the computing system executes an automated (or at least semi-automated) video processing procedure; once the one or more received live video feeds are selected for processing, those feeds are received by a video processing engine 108 that processes the received live video feeds in one or more ways; the computing system obtains a calibration video segment from each camera associated with the computing system, compares the calibration video segment to a corresponding reference video segment to evaluate the level of consistency between the video segments, and then configures each camera so that video received from each camera has a consistent look and feel when initially received for processing from its respective camera; wherein creating images with a consistent look can be considered an image request); and determining whether the image request can be satisfied by an existing image currently stored on an image server within the electronic device before triggering the one or more sensors to capture a new image (Col 4, lines 4-40; the computing system obtains a calibration video segment from each camera associated with the computing system, compares the calibration video segment to a corresponding reference video segment to evaluate the level of consistency between the video segments, and then configures each camera so that video received from each camera has a consistent look and feel when initially received for processing from its respective camera; the computing system automatically (or at least semi-automatically) adjusts the field of view for each (or at least some) of the associated cameras, by comparing the field of view for each camera to a corresponding reference field of view, and adjusting each camera's field of view to improve the likelihood that each camera will capture footage that includes the participants; since the camera parameters will be adjusted to capture images again after comparing current images with existing/reference images in order to create the consistent look, the image request of creating a consistent look can be satisfied after re-capturing images with modified parameters (triggering the camera (sensor) to capture more images); in other words, the images to be captured can be considered satisfied after the parameters are updated prior to triggering the camera/sensor).

Regarding claim 22, Beatch discloses the method of claim 19, wherein the received image request comprises requirements that specify one or more of: a timing or deadline requirement, an image resolution, an exposure level, a camera type, and a number of consecutive frames to capture (Col 5, lines 3-22; the system in some embodiments can calibrate and adjust an individual camera's video parameters (e.g., brightness, contrast, grading, saturation, color balance, image filter parameters, and/or other parameters) periodically or semi-periodically, as described herein; wherein the image request of having a consistent look can include adjusting the parameters listed above).
Regarding claim 23, Beatch discloses the method of claim 22, wherein determining whether the image request can be satisfied by an existing image currently stored on the image server comprises determining whether an existing image satisfies at least some of the requirements specified in the received image request (Col 4, lines 5-35; compares the calibration video segment to a corresponding reference video segment to evaluate the level of consistency between the video segments; wherein the level-of-consistency comparison can include brightness, contrast, saturation, and color balance comparison).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hirai (US Pub. No. US 2021/0368062 A1), in view of Koci (US Pub. No. US 2021/0006706 A1).

Regarding claim 4, Hirai does not disclose one or more inward-facing cameras configured to capture an image of an eye. Koci discloses one or more inward-facing cameras configured to capture an image of an eye (Para 38; the selfie camera 102 for capturing selfie content; wherein a selfie can include a portion of the subject's eye). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hirai with the teaching of Koci to include an additional selfie camera in order to capture and provide images of the user while he or she is capturing images or operating the camera, providing an option for video conferencing and enhanced vlogging with real-time reaction capture while reducing post-production editing.

Allowable Subject Matter

Claims 12, 13, 16, 17, 18, 20, and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 12, Gai et al. discloses with the at least one client, running a first set of algorithms (Figs. 6A, 6B; Para 67; the application ranks each task's importance 503 as well as its instantaneous priority 504; using this information, the system predicts the user's next possible moves and dynamically prioritizes tasks in the task queue to further accelerate the user feedback on the camera operation request). However, the prior art does not disclose "with the at least one client, running a first set of algorithms including a gaze tracking algorithm; and with an additional client running on the electronic device or on an additional electronic device separate from the electronic device, running a second set of algorithms different than the first set of algorithms" in combination with the other limitations of the claim. Claim 13 is objected to as being dependent from claim 12.

Regarding claim 16, Gai et al. discloses reordering at least some of the image requests in the queue (Figs. 6A, 6B; Para 67; the application ranks each task's importance 503 as well as its instantaneous priority 504; using this information, the system predicts the user's next possible moves and dynamically prioritizes tasks in the task queue to further accelerate the user feedback on the camera operation request). However, the prior art does not disclose "wherein reordering at least some of the image requests in the queue comprises reordering at least some of the image requests in the queue based on whether the image requests are associated with a user-facing algorithm or a non-user-facing algorithm" in combination with the other limitations of the claim.

Regarding claim 17, Gai et al. discloses the image requests (Para 78; user issues operation request). However, the prior art does not disclose "with the multiclient scheduler, determining whether at least two of the received image requests can be satisfied by a single image capture; and in response to determining that the at least two of the received image requests can be satisfied by a single image capture, coalescing the at least two of the received image requests into a single image request."

Regarding claim 18, Gai et al. discloses the multiclient scheduler (Figs. 6A, 6B; Para 67; the application ranks each task's importance 503 as well as its instantaneous priority 504). However, the prior art does not disclose "with the multiclient scheduler, triggering an image capture and returning a corresponding result to the at least one client; and with the multiclient scheduler, returning the result to an additional client, different than the at least one client, without triggering another image capture" in combination with the other limitations of the claim and its base claim.
Regarding claim 20, Beatch discloses whether the image request can be satisfied by an existing image currently stored on the image server (Col 4, lines 4-40; Col 7, lines 22-35 and 40-50; in response to receiving user input, the computing system executes an automated (or at least semi-automated) video processing procedure; once the one or more received live video feeds are selected for processing, those feeds are received by a video processing engine 108 that processes the received live video feeds in one or more ways; the computing system obtains a calibration video segment from each camera associated with the computing system, compares the calibration video segment to a corresponding reference video segment to evaluate the level of consistency between the video segments, and then configures each camera so that video received from each camera has a consistent look and feel when initially received for processing from its respective camera; wherein creating images with a consistent look can be considered an image request). However, the prior art does not disclose "in response to determining that the image request can be satisfied by an existing image currently stored on the image server, returning a pointer to the existing image to the client; and in response to determining that the image request cannot be satisfied by an existing image currently stored on the image server, triggering the one or more sensors to capture a new image" in combination with the other limitations of the claim.
Regarding claim 21, Beatch discloses whether the image request can be satisfied by an existing image currently stored on the image server (Col 4, lines 4-40; Col 7, lines 22-35 and 40-50; in response to receiving user input, the computing system executes an automated (or at least semi-automated) video processing procedure; once the one or more received live video feeds are selected for processing, those feeds are received by a video processing engine 108 that processes the received live video feeds in one or more ways; the computing system obtains a calibration video segment from each camera associated with the computing system, compares the calibration video segment to a corresponding reference video segment to evaluate the level of consistency between the video segments, and then configures each camera so that video received from each camera has a consistent look and feel when initially received for processing from its respective camera; wherein creating images with a consistent look can be considered an image request). However, the prior art does not disclose "in response to determining that the image request can be satisfied by an existing image currently stored on the image server, increasing a reference count for the existing image" in combination with the other limitations of the claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XI WANG, whose telephone number is 469-295-9155. The examiner can normally be reached 9:00 am-5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SINH TRAN, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XI WANG/
Primary Examiner, Art Unit 2637
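The claims at issue describe a recognizable software pattern: a multiclient scheduler that queues and priority-reorders image requests (claims 11, 15), coalesces requests that a single capture can satisfy (claim 17), and consults an image server for an existing image (returning a reference and bumping a reference count) before triggering the sensors (claims 19-21). The following is a minimal Python sketch of that pattern as the claims describe it; every class name, parameter, and data structure here is illustrative, not taken from the application.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class ImageRequest:
    priority: int                         # lower value = served sooner
    seq: int                              # tie-breaker preserving submission order
    client: str = field(compare=False)
    params: tuple = field(compare=False)  # capture parameters, e.g. (resolution, exposure)

class ImageServer:
    """Cache of captured images (the 'image server' of claims 19-21)."""
    def __init__(self):
        self._images = {}    # params -> image
        self.refcounts = {}  # params -> number of lookups referencing the image

    def lookup(self, params):
        # Claim 20: return a reference to an existing image if one satisfies
        # the request; claim 21: increase its reference count on a hit.
        if params in self._images:
            self.refcounts[params] += 1
            return self._images[params]
        return None

    def store(self, params, image):
        self._images[params] = image
        self.refcounts[params] = 1

class MulticlientScheduler:
    """Queues requests from multiple clients (claim 11), reorders them by
    priority (claim 15), and coalesces requests that a single capture can
    satisfy (claim 17)."""
    def __init__(self, server):
        self._heap = []
        self._seq = count()
        self._server = server

    def submit(self, client, params, priority):
        heapq.heappush(self._heap,
                       ImageRequest(priority, next(self._seq), client, params))

    def next_capture(self, capture_fn):
        if not self._heap:
            return None
        first = heapq.heappop(self._heap)
        served, keep = [first.client], []
        while self._heap:  # coalesce requests with identical capture parameters
            req = heapq.heappop(self._heap)
            if req.params == first.params:
                served.append(req.client)
            else:
                keep.append(req)
        for req in keep:
            heapq.heappush(self._heap, req)
        image = self._server.lookup(first.params)
        if image is None:  # cache miss: trigger the sensor and store the result
            image = capture_fn(first.params)
            self._server.store(first.params, image)
        return first.params, served, image
```

For example, if a hypothetical gaze-tracking client and a hand-tracking client both request a low-resolution frame while a photo app requests a high-resolution one, a single low-resolution capture serves the first two, and a later request with matching parameters is answered from the server without retriggering the sensor.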

Prosecution Timeline

Nov 06, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602078: PORTABLE INFORMATION HANDLING SYSTEM PERIPHERAL CAMERA AND DOCK WITH CONTOURED WIRELESS CHARGING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604091: VIDEO RECORDING METHOD AND ELECTRONIC DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598393: IMAGE CAPTURE DEVICE WITH INTERCHANGEABLE INTEGRATED SENSOR-OPTICAL COMPONENT ASSEMBLIES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596292: ZOOM DRIVING ACTUATOR AND POSITION CONTROL METHOD FOR ZOOM DRIVING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587734: DISPLAY CONTROL APPARATUS, DISPLAY CONTROL METHOD, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84% (98% with interview, +13.9%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
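The projection figures follow directly from the examiner's career counts stated above. A quick check of the arithmetic, assuming (per the page's own note) that grant probability is simply the career allow rate and that the interview figure adds the stated lift:

```python
# Figures stated on this page: 440 granted of 523 resolved, +13.9% interview lift
granted, resolved = 440, 523
interview_lift = 0.139

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")                   # 84.1%, displayed as 84%
print(f"{allow_rate + interview_lift:.1%}")  # 98.0%, displayed as 98%
```

Treat the interview-adjusted figure as an additive estimate from historical data, not an independent probability model.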
