Prosecution Insights
Last updated: April 19, 2026
Application No. 18/531,826

DOMAIN ADAPTATION VIA NETWORK CALIBRATION

Non-Final OA §103

Filed: Dec 07, 2023
Examiner: TUCKER, WESLEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 83% (596 granted / 715 resolved) — above average, +21.4% vs TC avg
Interview Lift: +6.1% across resolved cases with interview (moderate lift)
Typical Timeline: 3y 1m avg prosecution; 19 applications currently pending
Career History: 734 total applications across all art units
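The headline figures above follow from the raw career counts. A minimal sketch of that arithmetic (the additive interview lift is an assumption about how the "with interview" figure is derived; the dashboard's actual model may differ):

```python
# Sketch: deriving the headline examiner stats from raw career counts.
# Assumption: the interview lift is simply added to the base allow rate;
# the dashboard's rounded 90% figure may come from a different model.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate_pct(596, 715)           # ~83.4%, shown as "83%"
lift = 6.1                                # interview lift, percentage points
with_interview = min(base + lift, 100.0)  # ~89.5%, shown as "90%"

print(f"Career allow rate: {base:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```
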

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 39.4% (-0.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center average shown for comparison • Based on career data from 715 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, 11-13, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Publication No. 2024/0119360 to Lim et al. and U.S. Publication No. 2024/0143645 to Adeel et al.
With regard to claim 1, Lim discloses a method comprising:

receiving, at a device [and via a user interface, a selection of] a machine learning model trained to perform a video analytics task using a training dataset (paragraphs [0016]-[0017], Lim discloses that a machine learning model for identifying objects in streaming video is used and that the training data is from a source domain);

obtaining, by the device, video data from a target environment that is not represented in the training dataset (paragraphs [0017]-[0020], Lim discloses that additional input data is gathered from a target domain or environment different than the source domain training data);

performing, by the device and using the video data, network calibration on the machine learning model to form a domain-adapted model (paragraphs [0021]-[0027] and [0041]-[0046], Lim describes network calibration in the form of adjusting weights of the layers of the network according to the target or shifted domain in order to adapt the network model to the shifted/target domain); and

causing, by the device, the domain-adapted model to be deployed to perform the video analytics task with respect to the target environment (paragraph [0041], "…The operations 400 can be performed, for example, by a computing system, such as a user equipment (UE) or other computing device, such as that illustrated in FIG. 5, on which a pre-trained machine learning model can be adjusted prior to deployment (e.g., during or after training) or on which a machine learning model is deployed." The domain-adapted model for the domain-shifted data is deployed for performing the task in the target/shifted domain).

Lim discloses that the method is implemented on a user device (paragraphs [0016] and [0041]), but does not explicitly teach the step of receiving a user selection of a machine learning model.
Adeel teaches an image analysis system that uses machine learning to process multimedia files, and further teaches "using a user interface page provided by the user application 108, the user 112 may select the single machine-learning model that is trained to detect various items to process the multimedia file" (paragraph [0017]). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to allow a user to select a machine learning model, as taught by Adeel, in combination with the machine learning model use of Lim, in order to allow a user selection of a machine learning model.

With regard to claim 2, Lim discloses the method as in claim 1, wherein the video analytics task comprises at least one of: image classification (paragraph [0058]), object re-identification, or object detection (paragraphs [0003], [0016], [0053] and [0058], Lim teaches object detection and image classification).

With regard to claim 3, Lim discloses the method as in claim 1, wherein causing the domain-adapted model to be deployed to perform the video analytics task with respect to the target environment comprises: providing the domain-adapted model to an edge device in the target environment for execution (paragraphs [0003]-[0004], [0017] and [0041], Lim discloses that the domain-adapted machine learning model is deployed to a target environment and gives the example of autonomous driving, in which case the edge device would be the processor in the car or an end user equipment device).

With regard to claim 5, Lim discloses the method as in claim 1, wherein the video data from the target environment that is not represented in the training dataset depicts a feature not depicted in the training dataset (paragraphs [0004], [0017], Lim gives examples of environment conditions for the shifted domain such as urban/rural environment, weather conditions, lighting, i.e. bright versus dim, as well as blurring and noise).
With regard to claim 6, Lim discloses the method as in claim 5, wherein the feature comprises at least one of: a camera angle, a lighting condition, a cosmetic style of a type of object, or an image background (paragraphs [0004], [0017], Lim gives examples of environment conditions for the shifted domain such as urban/rural environment/background, weather conditions, lighting, i.e. bright versus dim, as well as blurring and noise).

With regard to claim 7, Lim discloses the method as in claim 1, wherein performing network calibration on the machine learning model to form the domain-adapted model comprises: determining an amount of distribution shift between the video data from the target environment and the training dataset (paragraph [0018], "…The magnitude of the performance reduction may be related to the magnitude of the shift between the source data set and the data input into the machine learning model at inference time. Generally, smaller domain shifts between the source data set and the data input into the machine learning model at inference time may result in better inference performance than larger domain [shifts] between the source data set and the data input into the machine learning model at inference time.").

With regard to claim 11, the discussion of claim 1 applies. Lim discloses an apparatus (Fig. 5, 500) with a network interface (512, 514, wireless connectivity), a processor (502, CPU), and memory (524) configured to store a process that is executed by the processor, as discussed with regard to the method of claim 1.

With regard to claims 12-13 and 15-17, the discussions of claims 2-3 and 5-7 apply, respectively.

With regard to claim 20, the discussions of claims 1 and 11 apply. Lim discloses a computer program product for performing the method discussed in claim 1 (paragraph [0006]).

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Publication Nos. 2024/0119360 to Lim et al. and 2024/0143645 to Adeel et al., and further in view of the publication titled "Bringing Quantization to the Transfer Learning World" to Hussein.

With regard to claims 4 and 14, Lim and Adeel disclose the method as in claim 1, but do not disclose wherein the network calibration de-quantizes the machine learning model to form the domain-adapted model. Hussein teaches a method for transfer learning that trains a model from source domain data and then updates the model for a target domain (pages 6-8 and Fig. 2.1), and further teaches that the domain-adapted model is generated or fine-tuned by dequantization of the source-trained model (pages 16-19; dequantization of the layers is discussed at section 3.3.2 on page 17). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to use the dequantization taught by Hussein to retrain the layers of the target domain model as taught by Lim in order to adapt the model to the target domain.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Publication Nos. 2024/0119360 to Lim et al. and 2024/0143645 to Adeel et al., and further in view of U.S. Publication No. 2021/0042643 to Hong et al.

With regard to claims 8 and 18, Lim and Adeel disclose the method of claim 1, but do not disclose, further comprising: computing, by the device, an accuracy of the domain-adapted model; and providing, by the device, an indication of the accuracy of the domain-adapted model to the user interface. Measuring the accuracy of a machine learning model is a well-known endeavor in the art. Hong teaches a system for adapting machine learning models and further teaches evaluating the accuracy and presenting the evaluated accuracy (paragraphs [0009], [0096]-[0100] and [0153] and Fig. 5).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to compute and provide an indication of the accuracy of the domain-adapted machine learning model in order to evaluate and improve the machine learning.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Publication Nos. 2024/0119360 to Lim et al. and 2024/0143645 to Adeel et al., and further in view of U.S. Publication No. 2024/0211763 to Trusov et al.

With regard to claim 9, Lim and Adeel disclose the method as in claim 1, but do not teach wherein performing network calibration on the machine learning model to form the domain-adapted model comprises: computing a scaling factor and zero-point for the network calibration based on the video data from the target environment. Trusov discloses a system for quantization of a neural network for use in performing image object recognition and video analysis (paragraph [0101]) and further teaches that the fine-tuning of the layers of the neural network model applies quantization and re-quantization, which comprises scaling factors and zero-point values. Therefore it would have been obvious to one of ordinary skill in the art to use the scaling factor and zero-point taught by Trusov in the network calibration of Lim and Adeel in order to better train the domain-adapted network.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of U.S. Publication Nos. 2024/0119360 to Lim et al. and 2024/0143645 to Adeel et al., and further in view of U.S. Publication No. 2023/0267720 to Marvasti et al.

With regard to claim 10, Lim and Adeel disclose the method as in claim 1, but do not disclose wherein the machine learning model is a You Only Look Once (YOLO) model. Marvasti discloses an adapted network in a vehicle image processing system that uses a YOLO architecture (paragraph [0061]). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to use a YOLO model as taught by Marvasti in the vehicle operating environment taught by Lim in order to enable an effective image recognition domain-adapted model.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER, whose telephone number is (571) 272-7427. The examiner can normally be reached 9AM-5PM, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESLEY J TUCKER/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Dec 07, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597221: IMAGE PROCESSING APPARATUS AND ELECTRONIC APPARATUS
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12597222: METHOD AND SYSTEM FOR DETERMINING A REGION OF WATER CLEARANCE OF A WATER SURFACE
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12592057: SYSTEM AND METHOD FOR DETECTING AND CLASSIFYING RETINAL MICROANEURYSMS
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585939: SYSTEMS AND METHODS FOR DISTRIBUTED DATA ANALYTICS
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12586410: Method and Device for Dynamic Recognition of Emotion Based on Facial Muscle Movement Monitoring
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 90% (+6.1%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 715 resolved cases by this examiner. Grant probability derived from career allow rate.
