Prosecution Insights
Last updated: April 19, 2026
Application No. 18/602,119

ARCHITECTURE FOR MICROCONTROLLER-BASED PLUG AND PLAY EDGE AI USB CAMERA

Status: Non-Final OA (§103)
Filed: Mar 12, 2024
Examiner: PATEL, NIMESH G
Art Unit: 2176
Tech Center: 2100 — Computer Architecture & Software
Assignee: E-Con Systems India Private Limited
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 77% (551 granted / 717 resolved), above average (+21.8% vs TC avg)
Interview Lift: +7.5% on resolved cases with interview (a moderate, roughly +8% lift)
Typical Timeline: 2y 11m average prosecution; 22 applications currently pending
Career History: 739 total applications across all art units

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 28.9% (-11.1% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Comparisons are against Tech Center average estimates; based on career data from 717 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fedyk (US 2025/0240335) and Tran (US 2020/0186751).

Regarding claim 1, Fedyk discloses a system of a universal serial bus (USB) communicating camera comprising: an integrated USB camera with a sensor and a first USB interface (Paragraph 46, universal serial bus (USB) camera plugged into a port of the client device 102A), a host system with a processor (Paragraph 46, client device 102A), a user interface application (Paragraph 25, the respective UIs 106A-106N can be displayed on the display devices 107A-107N by the client applications 105A-N executing on the operating systems of the client devices 102A-102N, 104) and a second USB interface to communicate to and from the integrated USB camera (Paragraph 46, universal serial bus (USB) camera plugged into a port of the client device 102A), the system being user configurable for application specific pre-trained machine learning models and inferencing captured frames (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312; Paragraph 61, In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.).

Fedyk does not specifically disclose the integrated camera including an image signal processor integrated with a microcontroller having an image processing module, an inference unit, and a flash memory. However, Tran discloses a camera with an image processor integrated with a microcontroller having an image processing module, an inference unit, and a memory (Paragraph 110, image processor; Figure 1B, CPU, Video analyzer, memory). Flash memory is a well-known type of memory. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have the integrated camera include an image signal processor integrated with a microcontroller having an image processing module, an inference unit, and a flash memory. The motivation to do so would be to use a well-known memory to increase compatibility and ease of implementation and free the host from extra processing.
Regarding claim 2, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the microcontroller of the integrated USB camera and the processor of the host computer execute the user interface application by a configuration and inference computer program (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312).

Regarding claim 3, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the user interface application is adapted to receive application specific configuration data from a user (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312).
Regarding claim 4, Fedyk discloses a system of the USB communicating camera as claimed in claim 3, wherein the application specific configuration data is converted to a protocol buffer (protobuf) file that associates data types with field names, using integers to identify each field (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312).

Regarding claim 5, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the inference unit is an artificial intelligence inference unit that draws inference on an image frame based on an input and the corresponding pre-trained machine learning model and the application specific configuration of the integrated USB camera (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312).
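Claim 4's configuration file uses the protocol buffer wire format, which identifies each value by an integer field number and wire type rather than by a field name. The sketch below shows that encoding in plain Python; the field numbers and the example camera-configuration fields (resolution, model name) are illustrative assumptions, not taken from the application.

```python
VARINT, LEN = 0, 2  # protobuf wire types: varint and length-delimited

def encode_tag(field_number, wire_type):
    # A protobuf field key is (field_number << 3) | wire_type:
    # the integer field number, not the field name, travels on the wire.
    return (field_number << 3) | wire_type

def encode_varint(value):
    # Base-128 varint: 7 payload bits per byte, high bit set on all but the last.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number, value):
    # Ints become varints; strings become length-delimited byte runs.
    if isinstance(value, int):
        return encode_varint(encode_tag(field_number, VARINT)) + encode_varint(value)
    data = value.encode("utf-8")
    return encode_varint(encode_tag(field_number, LEN)) + encode_varint(len(data)) + data

# Hypothetical camera configuration message (field numbers are assumptions):
#   1: width, 2: height, 3: model name
config = encode_field(1, 1920) + encode_field(2, 1080) + encode_field(3, "person-detect")
```

Because only the field numbers are serialized, the camera firmware and the host application can rename fields independently so long as the numbers stay stable.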
Regarding claim 6, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the microcontroller of the integrated USB camera sends a raw model output to the host system for post processing (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312).

Regarding claim 7, Fedyk discloses a system of the USB communicating camera of claim 1 that deploys edge computing (Figure 1, Client 102A).

Regarding claim 8, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the flash memory of the integrated USB camera has a firmware residing therein that supports different application specific pre-trained machine learning models with correspondingly different configuration data co-residing therein (Paragraph 61, the selection engine 418 may be capable of selecting a trained AI model 430A-M that has an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 418 may be capable of selecting the trained AI model that has the highest accuracy of multiple trained AI models 430A-M. In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.).
Regarding claim 9, Fedyk discloses a system of the USB communicating camera as claimed in claim 8, wherein the firmware remains unchanged when different application specific pre-trained machine learning models are uploaded in the flash memory (Paragraph 61, the selection engine 418 may be capable of selecting a trained AI model 430A-M that has an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 418 may be capable of selecting the trained AI model that has the highest accuracy of multiple trained AI models 430A-M. In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.).

Regarding claim 10, Fedyk discloses a system of the USB communicating camera as claimed in claim 1, wherein the host system sends a different application specific pre-trained model with the correspondingly different configuration data and wherein the integrated USB camera is rebooted to infer captured frames based on a different input of the different application specific pre-trained model (Paragraph 61, the selection engine 418 may be capable of selecting a trained AI model 430A-M that has an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 418 may be capable of selecting the trained AI model that has the highest accuracy of multiple trained AI models 430A-M. In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.).
Regarding claim 11, Fedyk discloses a system of the USB communicating camera of claim 1 that has a configuration mode and an inference mode, wherein the configuration mode enables a user based configuration for different application specific pre-trained machine learning models, while the inference mode enables a corresponding raw model output generation from the inference unit and a post processing of the raw model output by the host system (Paragraph 61, the selection engine 418 may be capable of selecting a trained AI model 430A-M that has an accuracy that meets a threshold accuracy. In some embodiments, the selection engine 418 may be capable of selecting the trained AI model that has the highest accuracy of multiple trained AI models 430A-M. In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.).

Regarding claim 12, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the configuration mode is adapted to provide options of a resolution setting including a custom resolution setting, with a default resolution setting (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 13, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the configuration mode is adapted to select video capturing formats and still picture capturing formats including one of a single still capture and a continuous still capture (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).
Regarding claim 14, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the configuration mode is adapted to set or limit a region of interest (ROI) (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 15, Fedyk discloses a system of the USB communicating camera as claimed in claim 14, wherein the ROI having a different aspect ratio of a ROI width and a ROI height adapts to an aspect ratio of the custom resolution (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 16, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the inference mode is adapted to trigger the integrated USB camera to start inferencing the frames and output the raw model output to the user interface application (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data.
In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 17, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the raw model output is sent from the integrated USB camera to the host system as data packets (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 18, Fedyk discloses a system of the USB communicating camera as claimed in claim 17, wherein the raw model output is a python list of output tensors (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 19, Fedyk discloses a system of the USB communicating camera as claimed in claim 10, wherein the inference mode is adapted to trigger on the host system a post processing of the raw model output into interpretable data including bounding boxes, labels, and scores (Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Regarding claim 20, Fedyk discloses a configuration and inference computer program and a corresponding user interface application, residing in a system of a universal serial bus (USB) communicating camera, comprising an integrated USB camera with a sensor and an image signal processor, integrated with a microcontroller having an image processing module, an inference unit and a USB interface, and a flash memory, a host system with a processor, a user interface application and a USB interface to communicate to and from the integrated USB camera (Paragraph 45, the background manager 138 may obtain a frame from the video stream associated with the first region 302 and input the frame into an image recognition AI model of the AI subsystem 139. The image recognition AI model may determine, from the input frame, whether the participant's surroundings (as contained in the frame) are cluttered, poorly lit, aesthetically unpleasing, or some other characteristic. In some implementations, the processing logic may modify the background of the first region 302A by default, and the first participant may deactivate the background enhancement feature by interacting with the background enhancement UI element 312; Paragraph 61, In some implementations, the selection engine 418 may receive input from another AI model or a human and may select a trained AI model based on the input.), the configuration and inference computer program and the corresponding user interface application being executed comprising the steps of:
a) Uploading a prescribed pre-trained machine learning model to the user interface application
b) Inputting through the user interface application a plurality of configurable parameters including model data, region of interest data, user data
c) Generating a configuration file in a protocol buffer format
d) Saving the configuration file and the model to the flash memory of the integrated USB camera
e) Rebooting the integrated USB camera causing the model and the configuration file to get updated in the inference unit
f) Receiving frames from the sensor of the integrated USB camera
g) Preprocessing the frame in the inference unit
h) Running model inference on a pre-processed frame
i) Sending a raw model output to a post processing model residing on the host system
j) Drawing an overlay according to a user defined post processing script
k) Displaying a preview
(Paragraph 70, As indicated above, a user can interact with the prompt subsystem via a prompt interface. The prompt interface may include a UI element that can support any suitable types of user inputs (e.g., textual inputs, speech inputs, image inputs, etc.). The UI element may further support any suitable types of outputs (e.g., textual outputs, speech outputs, image outputs, etc.). In some embodiments, the UI 106A-N may include the UI element of the prompt subsystem. The UI element can include selectable items, in some embodiments, that enables a user to select from multiple possible inputs. The UI element can allow the user to provide consent for the prompt subsystem or the generative AI model to access user data or other data associated with a client device 102A-N or stored in the data store 140, process, or store new data received from the user, and the like. The UI element can additionally or alternatively allow the user to withhold consent to provide access to user data. In some embodiments, user input entered using the UI element may be communicated to the prompt subsystem by a user API).

Fedyk does not specifically disclose the integrated camera including an image signal processor integrated with a microcontroller having an image processing module, an inference unit, and a flash memory. However, Tran discloses a camera with an image processor integrated with a microcontroller having an image processing module, an inference unit, and a memory (Paragraph 110, image processor; Figure 1B, CPU, Video analyzer, memory). Flash memory is a well-known type of memory. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have the integrated camera include an image signal processor integrated with a microcontroller having an image processing module, an inference unit, and a flash memory. The motivation to do so would be to use a well-known memory to increase compatibility and ease of implementation and free the host from extra processing.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIMESH G PATEL, whose telephone number is (571) 272-3640. The examiner can normally be reached Monday-Friday, 8:15-4:15. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jaweed Abbaszadeh, can be reached at 571-270-1640. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NIMESH G PATEL/
Primary Examiner, Art Unit 2187
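Claims 17-19 describe the inference-mode data path: the camera streams a raw model output (per claim 18, a Python list of output tensors) to the host, which post-processes it into bounding boxes, labels, and scores. A minimal sketch of that host-side step, assuming a hypothetical detection-output layout of (box, class_id, score) tuples and made-up class names; neither is taken from the application:

```python
def postprocess(raw_output, class_names, score_threshold=0.5):
    # Turn a raw detection output into interpretable records (claim 19's
    # bounding boxes, labels, scores), dropping low-confidence detections.
    detections = []
    for box, class_id, score in raw_output:
        if score >= score_threshold:
            detections.append({
                "box": box,                      # (x_min, y_min, x_max, y_max), normalized
                "label": class_names[class_id],  # map integer class id to a readable label
                "score": score,
            })
    return detections

# Hypothetical raw output: two candidate detections, one below threshold.
raw = [((0.1, 0.2, 0.4, 0.6), 0, 0.91), ((0.5, 0.5, 0.9, 0.9), 1, 0.32)]
result = postprocess(raw, ["person", "vehicle"])
```

A real deployment would decode the model's actual tensor layout and typically apply steps such as non-maximum suppression before drawing the overlay described in claim 20.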

Prosecution Timeline

Mar 12, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12565145: IN-VEHICLE APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12561278: CONTEXTUAL NOISE SUPPRESSION AND ACOUSTIC CONTEXT AWARENESS (ACA) DURING A COLLABORATION SESSION IN A HETEROGENOUS COMPUTING PLATFORM. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12554666: SCALABLE AND CONFIGURABLE NON-TRANSPARENT BRIDGES. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12554312: ADAPTIVE POWER SAVE MODE FOR A TOUCH CONTROLLER. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12530071: DYNAMICALLY SHUTTING DOWN COMPONENTS BASED ON OPERATION. Granted Jan 20, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 84% (+7.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 717 resolved cases by this examiner. Grant probability derived from career allow rate.
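The headline percentages can be reproduced from the stated career data. A quick check, assuming (as the footnote above suggests) that grant probability is simply the career allow rate and that the interview lift is additive in percentage points:

```python
granted, resolved = 551, 717          # from "551 granted / 717 resolved"
allow_rate = granted / resolved       # career allow rate
interview_lift = 0.075                # stated +7.5% interview lift

print(round(allow_rate * 100))                      # 77, the displayed grant probability
print(round((allow_rate + interview_lift) * 100))   # 84, the "with interview" figure
```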
