Prosecution Insights
Last updated: April 19, 2026
Application No. 18/257,424

SERVER, METHOD FOR PROCESSING A VIDEO BY MEANS OF THE SERVER, TERMINAL AND METHOD USED BY THE TERMINAL TO AUGMENT THE VIDEO BY MEANS OF AN OBJECT

Final Rejection — §102, §103
Filed: Jun 14, 2023
Examiner: LE, RONG
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: Orange
OA Round: 4 (Final)
Grant Probability: 68% — Favorable
Expected OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68% — above average (295 granted / 435 resolved; +9.8% vs TC avg)
Interview Lift: +29.7% — strong (resolved cases with vs. without interview)
Typical Timeline: 3y 4m avg prosecution; 34 applications currently pending
Career History: 469 total applications across all art units
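The headline figures in this panel follow from the raw counts it reports. A minimal arithmetic check, assuming the 98% "with interview" figure is the career allow rate plus the +29.7 percentage-point lift (the page does not state its exact derivation):

```python
# Reproduce the dashboard's headline stats from its own raw counts:
# 295 granted out of 435 resolved cases.
granted, resolved = 295, 435

career_allow_rate = granted / resolved        # shown rounded to 68%
interview_lift = 0.297                        # +29.7 percentage points

# Assumed derivation: additive lift, capped at 100%.
with_interview = min(career_allow_rate + interview_lift, 1.0)

print(f"Career allow rate: {career_allow_rate:.1%}")  # 67.8%
print(f"With interview:    {with_interview:.1%}")     # 97.5%
```

67.8% and 97.5% round to the displayed 68% and 98%, so the numbers are internally consistent under this assumption.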

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 435 resolved cases
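The "vs TC avg" deltas above can be back-solved into the implied Tech Center baselines. This sketch assumes the page computes each delta as examiner rate minus TC average; notably, every statute then implies the same 40.0% baseline, suggesting a single pooled figure rather than per-statute averages.

```python
# Back-solve the implied Tech Center averages from the dashboard's
# statute-specific rates and "vs TC avg" deltas (values copied from
# the table above; the subtraction convention is an assumption).
examiner_rate = {"101": 5.3, "103": 58.2, "102": 13.8, "112": 11.1}
delta_vs_tc = {"101": -34.7, "103": 18.2, "102": -26.2, "112": -28.9}

# implied TC average = examiner rate - delta
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies a 40.0% baseline
```

If the dashboard really does use one pooled baseline, per-statute comparisons (e.g. the strong §103 showing) should be read with that caveat in mind.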

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Miscellaneous

Claims pending: 1-17
Claims amended: n/a
Claims cancelled: n/a
New claims: n/a

Response to Arguments

Applicant's arguments with respect to the rejection(s) of claim(s) 1 have been fully considered. In the remarks dated 12/01/2025, applicant argues that the currently cited references all fail to teach the claim language. The examiner disagrees. Webster's dictionary defines "parameter" as "any of a set of physical properties whose values determine the characteristics or behavior of something."

Floury teaches a method for enriching an initial video with at least one enrichment object, said initial video including a succession of images acquired by a camera, implemented by a terminal: receiving, by the terminal, an initial video including the succession of images acquired by the camera, and parameters associated with respective images of the initial video; inserting, into a first image of the succession of images of the initial video, an enrichment object; and adapting, at least in images of the succession of images following the first image of the succession of images of said initial video received by the terminal, viewing characteristics for the enrichment object selected and inserted into said first image, the adaptation being performed as a function of parameters of the camera associated with images of the initial video, the parameters having been received by the terminal in association with the initial video. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 37, 41, 46-51, 57, 62-66: using a camera within the receiver to monitor the viewer's gaze at a video the viewer is watching; determining the object/subject within the main content video the viewer is gazing at by analyzing the signals the camera transmits to the display module; and locating the coordinates of the point of the image, the gaze point, and other information of the repeatedly gazed-at object (specifically Fig. 3, P. 40, 46-48), which reads on "a function of parameters of camera associated with images of initial video"; and getting information from the server for elements to insert into the main content in enrichment mode, such as inserting the subject player's name, club, age, etc. alongside the zoomed/enrichment-mode view of the object/subject, as shown in Fig. 4B following Fig. 4A, which reads on "receiving, by terminal: an initial video including succession of images acquired by camera, and parameters, associated with respective images of the initial video, inserting, a first image of succession of images of the initial video, an enrichment object, … viewing characteristics for one said enrichment object selected and inserted into said first image," the adaptation being performed as a function of parameters of said camera associated with images of the initial video, the parameters having been received by the terminal in association with the initial video; a similar example is also given from Fig. 4C to Fig. 4D, with their corresponding cited paragraphs.)

Trenholm further teaches parameters, associated with respective images of the initial video, of the camera that recorded the initial video; the parameters of said camera associated with the images of the initial video; and said camera which acquired the image. (Fig. 4, P. 49, 61: metadata such as EXIF data, which is part of the images, includes camera calibration, geolocation information, and other camera information, which are part of the image information.)
Thus, given the broadest reasonable interpretation, the combination of Floury and Trenholm still teaches the language of claims 1, 3, 13, and 15 as claimed. Applicant is reminded that although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The examiner suggests further amending the claims using the claimed technology and its background over the cited art to help further differentiate and move prosecution forward.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over FR0309256A1 (Floury) in view of US 20190138786 (Trenholm).
Regarding claims 1, 3, 13, and 15: Floury teaches a method for enriching an initial video with at least one enrichment object, said initial video including a succession of images acquired by a camera, implemented by a terminal: receiving, by the terminal, an initial video including the succession of images acquired by the camera, and parameters associated with respective images of the initial video; inserting, into a first image of the succession of images of the initial video, an enrichment object; and adapting, at least in images of the succession of images following the first image of the succession of images of said initial video received by the terminal, viewing characteristics for the enrichment object selected and inserted into said first image, the adaptation being performed as a function of parameters of the camera associated with images of the initial video, the parameters having been received by the terminal in association with the initial video. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 37, 41, 46-51, 57, 62-66: using a camera within the receiver to monitor the viewer's gaze at a video the viewer is watching; determining the object/subject within the main content video the viewer is gazing at; locating the coordinates of the point of the image, the gaze point, and other information of the repeatedly gazed-at object (specifically Fig. 3, P. 40, 46-48), which reads on "a function of parameters of camera associated with images of initial video"; and getting information from the server for elements to insert into the main content in enrichment mode, such as inserting the subject player's name, club, age, etc. alongside the zoomed/enrichment-mode view of the object/subject, as shown in Fig. 4B following Fig. 4A, which reads on "receiving, by terminal: an initial video including succession of images acquired by camera, and parameters, associated with respective images of the initial video, inserting, a first image of succession of images of the initial video, an enrichment object, … viewing characteristics for one said enrichment object selected and inserted into said first image," the adaptation being performed as a function of parameters of said camera associated with images of the initial video, the parameters having been received by the terminal in association with the initial video; a similar example is also given from Fig. 4C to Fig. 4D, with their corresponding cited paragraphs.)

Floury fails to specifically teach parameters, associated with respective images of the initial video, of the camera that recorded the initial video; the parameters of said camera associated with the images of the initial video; and said camera which acquired the image. Trenholm teaches these limitations. (Fig. 4, P. 49, 61: metadata such as EXIF data, which is part of the images, includes camera calibration, geolocation information, and other camera information, which are part of the image information.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury with the camera parameters associated with respective images of the initial video, as taught by Trenholm, in order to accurately and reliably identify objects within images.
Regarding claim 2: Floury in view of Trenholm teaches the method, the adaptation of the viewing characteristics for the enrichment object, said camera, and the terminal in association with the initial video. Floury further teaches taking into account a shooting parameter for an actual object of a scene filmed by said camera, the shooting parameter for the actual object being received by the terminal in association with the initial video. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 37, 41, 46-51, 57, 62-66: using a camera within the receiver to monitor the viewer's gaze, determining the object within the main content the viewer is gazing at, and locating the coordinates of the point of the image, the gaze point, and other information of the repeatedly gazed-at object (specifically Fig. 3, P. 40, 46-48).)

Regarding claim 5: Floury in view of Trenholm teaches the method, the adaptation of the viewing characteristics for the enrichment object, said camera, the terminal in association with the initial video, and said initial video. Floury further teaches that the initial video is sent as it is acquired. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 35-37, 41, 46-51, 57, 62-66: the content is received and displayed through antenna broadcast and/or network broadcast, or other means.)

Regarding claim 6: Floury in view of Trenholm teaches the method, the adaptation of the viewing characteristics for the enrichment object, said camera, the terminal in association with the initial video, said initial video, and the enriched stream. Floury further teaches that a given image and the parameters associated with that image are synchronized in the enriched stream. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 35-37, 41, 46-51, 57, 62-66: using a camera within the receiver to monitor the viewer's gaze, determining the object within the main content the viewer is gazing at, locating the coordinates and other information of the gazed-at object, and getting audio/video from the server to insert into the main content and provide with the main content in enrichment mode, in sync with the main content.)

Regarding claim 11: Floury further teaches a non-transitory computer-readable medium having stored thereon instructions which, when executed by a processor, cause the processor to implement the method of claim 1. (Fig. 1-2, P. 45, 69.)

Regarding claim 12: Floury further teaches a non-transitory computer-readable medium having stored thereon instructions which, when executed by a processor, cause the processor to implement the method of claim 3. (Fig. 1-2, P. 45, 69.)

Regarding claim 14: Floury in view of Trenholm teaches the method, the adaptation of the viewing characteristics for the enrichment object, said camera, and the terminal in association with the initial video. Floury further teaches a communication module to receive an enriched stream composed of the initial video and of said parameters associated with images of the initial video. (Floury Fig. 1, 3, 4a-4d, P. 9-16, 37, 41, 46-51, 57, 62-66: obtaining the gaze point and coordinate information for the main content, which is communicated to the server; within the server, using a communication module to obtain the audio/visual content used for enrichment, for providing to the receiver for display (specifically Fig. 2, P. 28, 42).)

Regarding claim 16: Floury in view of Trenholm teaches the method, the adaptation of the viewing characteristics for the enrichment object, said camera, and the terminal in association with the initial video. Floury further teaches that the terminal is remote from the camera which captured the initial video. (Floury Fig. 1, P. 37: the camera sensor 15 can be integrated into receiver 1 or separately attached via a cable.)
Claims 4, 7-8, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over FR0309256A1 (Floury) in view of US 20190138786 (Trenholm), further in view of the NPL "Exif data SEI message" (EDSM).

Regarding claim 4: Floury in view of Trenholm teaches the method and the enriched stream. Trenholm further teaches position and calibration parameters of said camera. (Fig. 4, P. 49, 61: metadata such as EXIF data, which is part of the images, includes camera calibration and geolocation information.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with position and calibration parameters of said camera, as taught by Trenholm, in order to accurately and reliably identify objects within images. Floury in view of Trenholm fails to specifically teach an insertion of parameters associated with the images of the initial video into control data of the enriched stream. EDSM teaches this insertion. (pg. 1, abstract, intro: insertion of EXIF data into a video stream and generation of an enriched stream.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with the insertion of parameters associated with the images of the initial video into control data of the enriched stream, as taught by EDSM, in order to provide more content options/formats.

Regarding claim 7: Floury in view of Trenholm teaches the method and the parameters of said camera associated with the images of the initial video, but fails to specifically teach an optimization of an emission of parameters of said camera. EDSM teaches such an optimization. (pg. 1, abstract, intro: insertion of EXIF data into a video stream and generation of an enriched stream, using an SEI message.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with an optimization of an emission of parameters of said camera, as taught by EDSM, in order to provide more content options/formats.

Regarding claim 8: Floury in view of Trenholm in view of EDSM teaches the method, the parameters of said camera associated with the images of the initial video, and an insertion into the enriched stream of at least one said parameter associated with a given image of the initial video. EDSM further teaches inserting the parameter if and only if it is different from the corresponding parameter associated with the previous image. (pg. 2, § 2 Text, EXIF data SEI message semantics: software code showing a valid message across multiple images.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm in view of EDSM with insertion if and only if the parameter differs from the corresponding parameter associated with the previous image, as taught by EDSM, in order to provide more content options/formats.

Regarding claim 17: Floury in view of Trenholm teaches the method, the terminal, the camera, the initial video, the parameters associated with the images of the initial video, and that the terminal receives the initial video and the parameters. Trenholm further teaches that the parameters are associated with at least a subset of images of the initial video. (Fig. 4, P. 49, 61: metadata such as EXIF data, which is part of the images, includes camera calibration and geolocation information.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with parameters associated with at least a subset of images of the initial video, as taught by Trenholm, in order to accurately and reliably identify objects within images. Floury in view of Trenholm fails to specifically teach receiving the initial video and the parameters in an enriched stream. EDSM teaches this. (pg. 1, abstract, intro: insertion of EXIF data into a video stream and generation of an enriched stream.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm to receive the initial video and the parameters in an enriched stream, as taught by EDSM, in order to provide more content options/formats.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over FR0309256A1 (Floury) in view of US 20190138786 (Trenholm), further in view of the NPL "Exif data SEI message" (EDSM), further in view of US 20140059079 (Oka).

Regarding claim 9: Floury in view of Trenholm teaches the method and the scene filmed by said camera, but fails to specifically teach a step of inserting into the enriched stream. EDSM teaches this step. (pg. 1, abstract, intro: insertion of EXIF data into a video stream and generation of an enriched stream.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with a step of inserting into the enriched stream, as taught by EDSM, in order to provide more content options/formats. Floury in view of Trenholm in view of EDSM fails to specifically teach a shooting parameter for an actual object of the scene filmed by the camera.
Oka teaches a shooting parameter for an actual object of the scene filmed by the camera. (Fig. 3A, 3B, P. 43: the EXIF standard includes shooting parameters such as a keyword for an image search, amongst other information about the scene.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm in view of EDSM with a shooting parameter for an actual object of the scene filmed by the camera, as taught by Oka, in order to accurately and reliably identify objects within images.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over FR0309256A1 (Floury) in view of US 20190138786 (Trenholm), further in view of the NPL "Exif data SEI message" (EDSM), further in view of US 20180276885 (Singh).

Regarding claim 10: Floury in view of Trenholm teaches the method, the enriched stream, the camera, and the parameters associated with the images. Trenholm further teaches position and calibration parameters of said camera. (Fig. 4, P. 49, 61: metadata such as EXIF data, which is part of the images, includes camera calibration and geolocation information.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with position and calibration parameters of said camera, as taught by Trenholm, in order to accurately and reliably identify objects within images. Floury in view of Trenholm fails to specifically teach an emission of parameters associated with the images of the initial video into control data of the enriched stream. EDSM teaches this emission. (pg. 1, abstract, intro: insertion of EXIF data into a video stream and generation of an enriched stream.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm with an emission of parameters associated with the images of the initial video into control data of the enriched stream, as taught by EDSM, in order to provide more content options/formats. Floury in view of Trenholm in view of EDSM fails to specifically teach that the emission is repeated. Singh teaches repeated emission. (P. 6, 62, 66, 75.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Floury in view of Trenholm in view of EDSM with repeated emission, as taught by Singh, in order to provide an accurate general-purpose procedure for silhouette extraction.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RONG LE, whose telephone number is (571) 270-7637. The examiner can normally be reached M-F, 9 am - 6 pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nathan Flynn, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONG LE/
Primary Examiner, Art Unit 2421
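The claim-8 limitation the examiner maps to EDSM (emitting a per-image camera parameter only when it differs from the value sent with the previous image) is essentially delta-encoding of per-frame metadata. A minimal sketch; the function name, field names, and data shapes below are illustrative, not taken from the application or the references:

```python
# Delta emission of per-frame camera parameters: a parameter set is
# written into the enriched stream only when it differs from the set
# sent with the previous frame (the claim-8 style "if and only if
# different" condition).
def emit_parameter_updates(frames):
    """Yield (frame_index, params) only when params change."""
    previous = None
    for i, params in enumerate(frames):
        if params != previous:
            yield i, params
            previous = params

# Hypothetical per-frame parameter sets for three frames.
frame_params = [
    {"focal_mm": 35, "lat": 48.85},  # frame 0: first value, emitted
    {"focal_mm": 35, "lat": 48.85},  # frame 1: unchanged, skipped
    {"focal_mm": 50, "lat": 48.85},  # frame 2: focal length changed, emitted
]
print(list(emit_parameter_updates(frame_params)))
```

A receiver reconstructing per-frame parameters would carry the last emitted set forward, which is why the first frame must always be emitted.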

Prosecution Timeline

Jun 14, 2023 — Application Filed
Jun 14, 2023 — Response after Non-Final Action
Sep 13, 2024 — Non-Final Rejection (§102, §103)
Dec 16, 2024 — Response Filed
Jan 14, 2025 — Final Rejection (§102, §103)
Mar 25, 2025 — Examiner Interview Summary
Mar 25, 2025 — Applicant Interview (Telephonic)
Apr 21, 2025 — Response after Non-Final Action
Jun 23, 2025 — Request for Continued Examination
Jun 29, 2025 — Response after Non-Final Action
Aug 21, 2025 — Non-Final Rejection (§102, §103)
Dec 01, 2025 — Response Filed
Jan 15, 2026 — Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598657 — Projection Connection Control Method And Electronic Device (granted Apr 07, 2026; 2y 5m to grant)
Patent 12581141 — SYSTEM AND METHOD FOR CUSTOMISATION OF MEDIA INFORMATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574589 — Signaling for Picture In Picture In Media Container File and In Streaming Manifest (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574604 — PROGRAM GUIDE WITH SPOILER PREVENTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574583 — DISPLAY APPARATUS AND METHOD FOR PERSON RECOGNITION AND PRESENTATION (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68%
With Interview: 98% (+29.7%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 435 resolved cases by this examiner. Grant probability derived from career allow rate.
